LVM Setup Help

daman92

New Member
Dec 18, 2019
Hello, I'm new to Proxmox (and LVM) and can't figure out how to get my LVM-partitioned drive to show up with its full logical volume sizes in Proxmox. I have 18TB usable in a RAID 6 on an H700 RAID card. I have set up one physical volume and two logical volumes (VMs - 3T, and dataStore - 15T). In Proxmox, both the VMs and dataStore volumes show only 62.92GB. I have worked on this for a few days, so I'm hoping someone here can help. Here is the output of some common commands you'll probably need - thanks for your help!

Code:
pvs

  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  135.62g 16.00g
  /dev/sdb   vg1 lvm2 a--  <18.19t <8.68g


Code:
lvs

  /dev/sdd: open failed: No medium found
  /dev/sde: open failed: No medium found

  LV        VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data      pve twi-a-tz-- 75.87g             0.00   1.60
  root      pve -wi-ao---- 33.75g
  swap      pve -wi-ao----  8.00g
  VMs       vg1 -wi-ao----  3.00t
  dataStore vg1 -wi-ao---- 15.18t

Code:
vgdisplay

  /dev/sdd: open failed: No medium found
  /dev/sde: open failed: No medium found

  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <18.19 TiB
  PE Size               4.00 MiB
  Total PE              4767999
  Alloc PE / Size       4765778 / 18.18 TiB
  Free  PE / Size       2221 / <8.68 GiB
  VG UUID               kM8s14-26Zc-iBGt-4VN5-tC7G-OK7s-b1gb6i

Code:
df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                        63G     0   63G   0% /dev
tmpfs                       13G  9.2M   13G   1% /run
/dev/mapper/pve-root        33G  2.0G   30G   7% /
tmpfs                       63G   43M   63G   1% /dev/shm
tmpfs                      5.0M     0  5.0M   0% /run/lock
tmpfs                       63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/vg1-VMs        3.0T   89M  2.9T   1% /var/VMs
/dev/mapper/vg1-dataStore   16T   11M   15T   1% /var/dataStore
/dev/sda2                  511M  324K  511M   1% /boot/efi
/dev/fuse                   30M   16K   30M   1% /etc/pve
tmpfs                       13G     0   13G   0% /run/user/0

Code:
lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 136.1G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part /boot/efi
└─sda3               8:3    0 135.6G  0 part
  ├─pve-swap       253:2    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:3    0  33.8G  0 lvm  /
  ├─pve-data_tmeta 253:4    0     1G  0 lvm
  │ └─pve-data     253:6    0  75.9G  0 lvm
  └─pve-data_tdata 253:5    0  75.9G  0 lvm
    └─pve-data     253:6    0  75.9G  0 lvm
sdb                  8:16   0  18.2T  0 disk
├─vg1-VMs          253:0    0     3T  0 lvm  /var/VMs
└─vg1-dataStore    253:1    0  15.2T  0 lvm  /var/dataStore
sdc                  8:32   1  29.8G  0 disk
├─sdc1               8:33   1   238K  0 part
├─sdc2               8:34   1   2.8M  0 part
├─sdc3               8:35   1 772.8M  0 part
└─sdc4               8:36   1   300K  0 part
sr0                 11:0    1  1024M  0 rom
 
Yes, it's in the GUI. Screenshot enclosed. I will add that I thought it might just be displaying the wrong data, but when I tried to start a VM on the VMs logical volume, the install failed because it ran out of room.

Screenshot_3.png
 

Attachments

  • Screenshot_4.png
  • Screenshot_5.png
  • Screenshot_6.png
Could you post your /etc/storage.cfg as well? Also, please post the output of pveversion -v.
 
So, I don't have a file at /etc/storage.cfg ...

but I do have one at /etc/pve/storage.cfg:
Code:
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

dir: VMs
    path /dev/VMs
    content iso,rootdir,snippets,images,vztmpl,backup
    maxfiles 1
    shared 0

dir: dataShare
    path /dev/dataShare
    content iso,rootdir,snippets,images,vztmpl,backup
    maxfiles 1
    shared 1

Code:
pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
Sorry, I meant /etc/pve/storage.cfg.
 
Code:
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

dir: VMs
    path /dev/VMs
    content iso,rootdir,snippets,images,vztmpl,backup
    maxfiles 1
    shared 0

dir: dataShare
    path /dev/dataShare
    content iso,rootdir,snippets,images,vztmpl,backup
    maxfiles 1
    shared 1
 
Why are your storages added as directory storages and not lvm/lvmthin? How did you add them?
 
I set the LVM up from the command line because I couldn't get the drive to add through the GUI. I really don't have a preference for it; I thought that's what I needed to use for the VMs, and I thought it would make resizing easier in the future if I needed more space one way or the other.
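Roughly, this is what I ran from the command line (typing it from memory, so the exact sizes, flags, and filesystem type may not match what I actually did):

Code:
# create the PV and volume group on the RAID array
pvcreate /dev/sdb
vgcreate vg1 /dev/sdb

# carve out the two logical volumes
lvcreate -L 3T -n VMs vg1
lvcreate -L 15.18T -n dataStore vg1

# filesystem type is from memory, it may have been something other than ext4
mkfs.ext4 /dev/vg1/VMs
mkfs.ext4 /dev/vg1/dataStore

# mount them where df shows them
mkdir -p /var/VMs /var/dataStore
mount /dev/vg1/VMs /var/VMs
mount /dev/vg1/dataStore /var/dataStore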

I added the dataStore as a directory because that's how I'd like to use it. I set up VMs as a directory too, just because that's what I did with the dataStore.
 
Can you correct them to lvm in /etc/pve/storage.cfg? Maybe that could solve your problem. This is quite puzzling, and I think it might have to do with the storage being detected differently.
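For reference, an lvm entry in /etc/pve/storage.cfg references a whole volume group rather than a single logical volume, so it would look roughly like this (note that an lvm storage only holds guest disk images and container volumes, not ISOs or backups, and it only hands out the free extents of the VG - which in your case is only ~8.7 GiB as long as the two existing LVs stay in place):

Code:
lvm: VMs
    vgname vg1
    content images,rootdir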
 
I did that, no luck. Is there an easier way I can partition this array out? Really, all I want is a 3TB partition for VMs and containers, and a 15TB storage area. Do I have to use LVM for VMs and containers?
 
There is something going wrong in general:
From your df output:
Code:
/dev/mapper/vg1-VMs        3.0T   89M  2.9T   1% /var/VMs
/dev/mapper/vg1-dataStore   16T   11M   15T   1% /var/dataStore

From your Storage config:
Code:
dir: VMs
path /dev/VMs
content iso,rootdir,snippets,images,vztmpl,backup
maxfiles 1
shared 0

dir: dataShare
path /dev/dataShare
content iso,rootdir,snippets,images,vztmpl,backup
maxfiles 1
shared 1

Why did you set the path to /dev/VMs and /dev/dataShare instead of /var/VMs and /var/dataStore in the storage configuration?
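If you want to keep them as directory storages, the paths would have to point at the actual mount points from your df output, roughly like this (I also changed shared to 0 here; more on that below):

Code:
dir: VMs
    path /var/VMs
    content iso,rootdir,snippets,images,vztmpl,backup
    maxfiles 1
    shared 0

dir: dataShare
    path /var/dataStore
    content iso,rootdir,snippets,images,vztmpl,backup
    maxfiles 1
    shared 0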

Proxmox VE can use LVM directly for VM and container disks. There is no need to create a volume, format it and mount it yourself to configure a directory storage on it.
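As a rough sketch of what that could look like on the command line (the storage name "vm-disks" is just a placeholder, and this assumes you no longer need the hand-made VMs volume - lvremove is destructive):

Code:
# example only: free the 3T "VMs" LV so its space goes back to the volume group
umount /var/VMs
lvremove vg1/VMs

# then let Proxmox VE hand out guest disks from the volume group directly
# ("vm-disks" is just a placeholder storage name)
pvesm add lvm vm-disks --vgname vg1 --content images,rootdir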

Why did you activate the shared option for the dataShare storage? If you think this will enable you to share data between your VMs, I must disappoint you. This option tells Proxmox VE that each node can access the storage directly when it is used in a cluster of multiple Proxmox VE nodes.