So I have a volume group (highperf) containing two logical volumes (highperf_lv1 and highperf_lv2) sitting on a hardware RAID1 array.
Array size is 278.46GB as reported by the RAID controller and the LSI.sh monitoring script:
Code:
Virtual Drive: 1 (Target Id: 1)
Name :
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 278.464 GB
Sector Size : 512
Mirror Data : 278.464 GB
State : Optimal
Strip Size : 128 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disabled
Encryption Type : None
Is VD Cached: No
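For reference, output like the above comes from LSI's MegaCli utility (LSI.sh is a wrapper script around it); the listing should be reproducible with something like:
Code:
# Lists all virtual drives on all adapters (binary may be MegaCli64 on some installs):
MegaCli -LDInfo -Lall -aALL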
vgs also "sees" the volume group (highperf) as 278.46GB:
Code:
root@proxmox:~# vgs
VG #PV #LV #SN Attr VSize VFree
highperf 1 2 0 wz--n- 278.46g 0
pve 1 16 0 wz--n- 277.96g <16.00g
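Note that LVM's lowercase "g" suffix means binary gibibytes (1024^3 bytes). To rule out unit ambiguity, the standard --units flag can show SI gigabytes or exact bytes instead:
Code:
vgs --units G highperf   # uppercase G = SI gigabytes (1000^3 bytes)
vgs --units b highperf   # exact byte count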
lvs also sees the logical volumes as 120GB + 158.46GB (total 278.46GB):
Code:
root@proxmox:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
highperf_lv1 highperf -wi-ao---- 120.00g
highperf_lv2 highperf -wi-ao---- 158.46g
data pve twi-aotz-- <181.02g 70.72 4.34
root pve -wi-ao---- 69.25g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi-aotz-- 10.00g data 30.41
vm-111-disk-0 pve Vwi-aotz-- 5.00g data 96.15
vm-199-disk-0 pve Vwi-aotz-- 10.00g data 30.82
vm-999-disk-0 pve Vwi-a-tz-- 10.00g data 28.50
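The same flag works on lvs if exact LV sizes with no rounding are wanted:
Code:
lvs --units b highperf   # LV sizes in bytes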
vgdisplay
Code:
root@proxmox:~# vgdisplay
--- Volume group ---
VG Name highperf
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 278.46 GiB
PE Size 4.00 MiB
Total PE 71286
Alloc PE / Size 71286 / 278.46 GiB
Free PE / Size 0 / 0
VG UUID xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
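A quick sanity check on the extent math confirms the reported size: 71286 PEs at 4 MiB each is 285144 MiB, i.e. 278.46 GiB:
Code:
echo "scale=2; 71286 * 4 / 1024" | bc   # 278.46 (GiB)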
lvdisplay
Code:
root@proxmox:~# lvdisplay
--- Logical volume ---
LV Path /dev/highperf/highperf_lv1
LV Name highperf_lv1
VG Name highperf
LV UUID xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
LV Write Access read/write
LV Creation host, time proxmox, 2015-07-01 21:19:27 -0400
LV Status available
# open 1
LV Size 120.00 GiB
Current LE 30720
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/highperf/highperf_lv2
LV Name highperf_lv2
VG Name highperf
LV UUID xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
LV Write Access read/write
LV Creation host, time proxmox, 2015-07-01 21:19:49 -0400
LV Status available
# open 1
LV Size 158.46 GiB
Current LE 40566
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
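Likewise, the LE counts agree with the reported LV sizes, so LVM is at least internally consistent:
Code:
echo "scale=2; 30720 * 4 / 1024" | bc   # 120.00 GiB (highperf_lv1)
echo "scale=2; 40566 * 4 / 1024" | bc   # 158.46 GiB (highperf_lv2)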
Each of these logical volumes is passed to a VM via an entry in its "VMID.conf" file, like so:
Code:
VM #1
virtio1: /dev/highperf/highperf_lv1,backup=no,size=120G
VM #2
virtio1: /dev/highperf/highperf_lv2,backup=no,size=158.46G
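The exact size of each device as the host kernel sees it can be confirmed with lsblk (util-linux), which reports raw bytes with -b:
Code:
# 120 GiB should read as exactly 128849018880 bytes:
lsblk -b -o NAME,SIZE /dev/highperf/highperf_lv1 /dev/highperf/highperf_lv2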
However, in each of these VMs the reported size is larger than the numbers PVE shows via the commands above. For example, in the first VM the disk size is reported as 129GB when it should be 120GB:
Code:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 8.4G 0 8.4G 0% /dev
tmpfs 8.4G 0 8.4G 0% /dev/shm
tmpfs 8.4G 26M 8.4G 1% /run
tmpfs 8.4G 0 8.4G 0% /sys/fs/cgroup
/dev/mapper/centos_centos--database-root 29G 3.6G 25G 13% /
/dev/vdb 129G 14G 115G 11% /mnt/sql-databases
/dev/vda1 521M 298M 224M 58% /boot
freenas:/mnt/zpool/storage/centos-database-data 5.0T 2.1T 2.9T 43% /mnt/data
freenas:/mnt/zpool/storage/centos-database-backup 5.0T 2.1T 2.9T 43% /mnt/backup
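One thing worth checking inside the guest is which units df is using: -h reports powers of 1024 while -H reports powers of 1000, and the kernel's raw byte count settles any doubt (assuming standard coreutils/util-linux in the VM):
Code:
df -h /mnt/sql-databases         # binary units (GiB)
df -H /mnt/sql-databases         # SI units (GB)
blockdev --getsize64 /dev/vdb    # device size in bytes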
On the second VM, it's reporting 171GB instead of 158.46GB!
Code:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 17G 0 17G 0% /dev
tmpfs 17G 0 17G 0% /dev/shm
tmpfs 17G 9.0M 17G 1% /run
tmpfs 17G 0 17G 0% /sys/fs/cgroup
/dev/mapper/rl_vm--template-root 8.6G 2.7G 5.9G 32% /
/dev/vda1 1.1G 295M 769M 28% /boot
freenas:/mnt/zpool/storage/database.tuxdomain-data 5.0T 2.1T 2.9T 43% /mnt/data
freenas:/mnt/zpool/storage/database.tuxdomain-backup 5.0T 2.1T 2.9T 43% /mnt/backup
tmpfs 3.4G 0 3.4G 0% /run/user/1000
/dev/vdb 171G 59G 112G 35% /mnt/sql-databases
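For what it's worth, converting the two LV sizes from binary GiB to SI GB lands almost exactly on the guests' numbers (df rounds up to the next displayed unit), which would point at a units mismatch rather than any actual extra space:
Code:
echo "scale=2; 120 * 1024^3 / 1000^3" | bc      # 128.84 -> displayed as 129G
echo "scale=2; 158.46 * 1024^3 / 1000^3" | bc   # 170.14 -> displayed as 171G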
Finally, the sizes reported by the Proxmox webUI are also wrong, showing 299GB (see screenshots). That roughly matches the wrong sizes reported by the VMs (171+129=300)...
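The same conversion applied to the whole VG lines up with the webUI figure as well:
Code:
echo "scale=2; 278.46 * 1024^3 / 1000^3" | bc   # 298.99 -> the 299GB in the UI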
What's going on????