How can I best create a data volume in local-lvm (pve) for an Ubuntu client to mount?

gctwnl
Aug 24, 2022
Newbie here (obviously).

I've got a very simple PVE setup with a 1TB internal SSD set up in a default mode, so my Summary says something like:
Code:
100 (ubuntu-vm-name)
local (pve)
local-lvm (pve)

local (pve) is 100GB (LVM) and holds the Debian Linux with PVE and the Ubuntu iso, local-lvm (pve) (LVM-Thin) is the rest (850GB) and holds the vm-100-disk-0 disk image (root disk for the Ubuntu client). The rest of that 850GB is currently unused.

I would like to create a volume in that 850GB area and mount that in my Ubuntu client.

On pve in a shell, I can see:

Code:
root@pve:~# lsblk
NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                             8:0    0   1.7T  0 disk  
└─sda1                                          8:1    0   1.7T  0 part  
  └─luks-fa1483bd-f599-4dcf-9732-c09069472150 253:7    0   1.7T  0 crypt 
    ├─rna--mepdm--1-vm--100--disk--0          253:8    0   500G  0 lvm   
    └─rna--mepdm--1-rna--pbs--mepdm--1        253:9    0   500G  0 lvm   /mnt/pbs-backup-1
nvme0n1                                       259:0    0 931.5G  0 disk  
├─nvme0n1p1                                   259:1    0  1007K  0 part  
├─nvme0n1p2                                   259:2    0   512M  0 part  /boot/efi
└─nvme0n1p3                                   259:3    0   931G  0 part  
  ├─pve-swap                                  253:0    0     8G  0 lvm   [SWAP]
  ├─pve-root                                  253:1    0    96G  0 lvm   /
  ├─pve-data_tmeta                            253:2    0   8.1G  0 lvm   
  │ └─pve-data-tpool                          253:4    0 794.8G  0 lvm   
  │   ├─pve-data                              253:5    0 794.8G  1 lvm   
  │   └─pve-vm--100--disk--0                  253:6    0    32G  0 lvm   
  └─pve-data_tdata                            253:3    0 794.8G  0 lvm   
    └─pve-data-tpool                          253:4    0 794.8G  0 lvm   
      ├─pve-data                              253:5    0 794.8G  1 lvm   
      └─pve-vm--100--disk--0                  253:6    0    32G  0 lvm   
root@pve:~# pvs
  PV                                                    VG          Fmt  Attr PSize    PFree  
  /dev/mapper/luks-fa1483bd-f599-4dcf-9732-c09069472150 rna-mepdm-1 lvm2 a--    <1.75t 788.36g
  /dev/nvme0n1p3                                        pve         lvm2 a--  <931.01g  15.99g
root@pve:~# vgs
  VG          #PV #LV #SN Attr   VSize    VFree  
  pve           1   4   0 wz--n- <931.01g  15.99g
  rna-mepdm-1   1   2   0 wz--n-   <1.75t 788.36g
root@pve:~# lvs
  LV              VG          Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve         twi-aotz-- <794.79g             1.59   0.29                            
  root            pve         -wi-ao----   96.00g                                                    
  swap            pve         -wi-ao----    8.00g                                                    
  vm-100-disk-0   pve         Vwi-aotz--   32.00g data        39.54                                  
  rna-pbs-mepdm-1 rna-mepdm-1 -wi-ao----  500.00g                                                    
  vm-100-disk-0   rna-mepdm-1 -wi-ao----  500.00g
I have a bit of trouble getting this clear in my head. For one, I am struggling to understand the LV 'data' versus the Pool 'data' in the lvs output. And I am not quite sure how to create a 100GB volume on nvme0n1p3 that is mounted inside the Ubuntu client on, say, directory /mnt/CData. I guess this starts with the top-left 'Datacenter' entry in PVE, under Storage, but then what? Isn't that large part of the internal disk (nvme0n1p3) already fully assigned to the LVM-Thin pool?
 
You don't manage LVM directly here, nor do you create whole new logical volumes for VM disks. You allocate new virtual disks from the existing thin pool. You can either use the GUI to add a new disk, or the CLI:
Code:
pvesm alloc local-lvm 100 vm-100-disk-1 100G
qm set 100 --scsihw virtio-scsi-pci --scsi1 local-lvm:vm-100-disk-1

Note that the controller and device name may be different in your configuration; check with "qm config 100".
Once the raw disk is visible in the VM, you can partition/format/mount it inside the VM.
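Inside the Ubuntu guest, the guest-side steps could look roughly like this. The device name /dev/sdb is an assumption (it depends on which controller slot the disk was attached to); always verify with lsblk first:

```shell
# Inside the Ubuntu VM, as root. /dev/sdb is an assumption -- verify with lsblk!
lsblk                                   # identify the new 100G disk, e.g. /dev/sdb
mkfs.ext4 /dev/sdb                      # format the whole disk (a partition table is optional)
mkdir -p /mnt/CData
mount /dev/sdb /mnt/CData               # mount it for the current session
# make the mount persistent across reboots, referencing the filesystem by UUID:
echo "UUID=$(blkid -s UUID -o value /dev/sdb) /mnt/CData ext4 defaults 0 2" >> /etc/fstab
```

Mounting by UUID rather than by device name avoids breakage if the disk order changes after adding or removing virtual disks.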

You should review https://pve.proxmox.com/pve-docs/chapter-pvesm.html


 
Thank you. I'd rather use the GUI for this (less chance of making a mistake). Still, I tried the first command. It creates a volume, which shows up in the GUI under Storage local-lvm (pve) on the left, but I cannot remove it there because a 'guest' with VMID 100 exists?? Yet guest 100 doesn't show it in the GUI. I could simply add hardware in my guest and it created the disk, after which I had both vm-100-disk-2 and a dangling vm-100-disk-1 somewhere. I could not remove either one, not even after detaching it from guest 100. By the way, 'qm config 100' doesn't show these new volumes, but I still cannot remove them from storage (and begin anew):

Code:
root@pve:~# qm config 100
balloon: 4096
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04.1-live-server-amd64.iso,media=cdrom,size=1440306K
memory: 8192
meta: creation-qemu=7.0.0,ctime=1665090989
name: rna-mainserver-vm
net0: virtio=A6:97:9A:EF:7E:EE,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-0,size=32G
scsi1: rna-mepdm-1:vm-100-disk-0,backup=0,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=7b07bbf7-d3d4-4252-85bb-8a4f0b720f82
sockets: 1
unused0: local-lvm:vm-100-disk-2
vmgenid: d0cb1769-021e-4b1d-ba6f-28a3d5e7eeb1

Ha, wait: an unused disk is still connected to this VM. I can remove the unused0 disk, but the one allocated with pvesm alloc still cannot be freed. However, after 'qm disk rescan' it becomes visible in the GUI and I can remove it. Back to the start, but now it is simpler: in the GUI, in the VM's Hardware section, simply add a hard disk and have it created on the local-lvm pool.
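For reference, the cleanup sequence described above can be sketched on the PVE host like this (vm-100-disk-1 being the leftover volume from the earlier pvesm alloc; the exact unusedN key depends on your config):

```shell
# On the PVE host: make orphaned volumes visible in the VM's config
qm disk rescan --vmid 100             # leftovers appear as unusedN entries
qm config 100 | grep unused           # e.g. unused0: local-lvm:vm-100-disk-1
# now the disk can be removed via the GUI, or directly on the CLI:
qm set 100 --delete unused0           # detach the unused entry from the config
pvesm free local-lvm:vm-100-disk-1    # free the underlying thin volume
```

The point is that PVE refuses to free a volume that is (or might be) owned by a guest, so the volume first has to be surfaced in the guest's config before it can be deleted.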