Hello,
I have a freshly upgraded PVE 5 installation, and I think there's something wrong with the lvs command output for thin LVs.
When I call lvs with the option to show snapshot sizes, the output reports Snap% equal to Data% for thin-provisioned volumes (the last two columns), even though I currently have no snapshots.
Code:
# lvs -o+size,data_percent,snap_percent
LV               VG       Attr         LSize Pool         Origin  Data%  Meta% Move Log Cpy%Sync Convert   LSize  Data%  Snap%
.drbdctrl_0      drbdpool -wi-ao----   4,00m                                                                 4,00m
.drbdctrl_1      drbdpool -wi-ao----   4,00m                                                                 4,00m
drbdthinpool     drbdpool twi-aotz--   1,56t                       61,16  31,23                              1,56t  61,16  61,16
vm-100-disk-2_00 drbdpool Vwi-aotz--  10,00g drbdthinpool          99,99                                    10,00g  99,99  99,99
vm-101-disk-1_00 drbdpool Vwi-aotz--   4,00g drbdthinpool          99,93                                     4,00g  99,93  99,93
vm-101-disk-2_00 drbdpool Vwi-aotz--  60,02g drbdthinpool         100,00                                    60,02g 100,00 100,00
vm-102-disk-1_00 drbdpool Vwi-aotz--  12,00g drbdthinpool          99,99                                    12,00g  99,99  99,99
vm-103-disk-1_00 drbdpool Vwi-aotz--  10,01g drbdthinpool          99,91                                    10,01g  99,91  99,91
vm-103-disk-2_00 drbdpool Vwi-aotz--   5,01g drbdthinpool          99,88                                     5,01g  99,88  99,88
vm-104-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool          86,28                                    10,00g  86,28  86,28
vm-104-disk-2_00 drbdpool Vwi-aotz-- 900,20g drbdthinpool          93,52                                   900,20g  93,52  93,52
vm-106-disk-1_00 drbdpool Vwi-a-tz--   8,01g drbdthinpool           0,00                                     8,01g   0,00   0,00
vm-121-disk-1_00 drbdpool Vwi-aotz--  15,02g drbdthinpool          99,92                                    15,02g  99,92  99,92
vm-122-disk-1_00 drbdpool Vwi-aotz--   8,01g drbdthinpool          85,68                                     8,01g  85,68  85,68
vm-123-disk-1_00 drbdpool Vwi-aotz--   5,01g drbdthinpool          99,88                                     5,01g  99,88  99,88
data             pve      -wi-ao----  16,82g                                                                16,82g
root             pve      -wi-ao----   7,25g                                                                 7,25g
swap             pve      -wi-ao----   2,00g                                                                 2,00g
This hits me hard, because the onboard DRBD9 uses these thin LVs as its backend, and drbdmanage issues a terse version of this command to calculate free space. Obviously Data% + Snap% then adds up to more than 100%, leaving no free space.
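Just to make the arithmetic explicit, here is a rough sketch of the calculation (not drbdmanage's actual code; the LV path drbdpool/drbdthinpool and forcing LC_ALL=C to get dot decimals are my own choices for the example):
Code:
# sketch only: add Data% and Snap% the way a free-space check would, then see what is left
LC_ALL=C lvs --noheadings --nosuffix -o data_percent,snap_percent drbdpool/drbdthinpool \
  | awk '{ printf "data %.2f%% + snap %.2f%% = %.2f%% used -> %.2f%% free\n", $1, $2, $1+$2, 100-($1+$2) }'
# with the pool values above this gives roughly: data 61.16% + snap 61.16% = 122.32% used -> -22.32% free
With a correct (empty) Snap% column, as on PVE 4, the sum would simply equal Data%.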
I tested the same command on another machine with PVE 4 on board, and there the Snap% column is empty.
Code:
LVM version: 2.02.116(2) (2015-01-30), kernel: 4.4.35-1-pve
My context:
Code:
# lvs --version
LVM version: 2.02.168(2) (2016-11-30)
Library version: 1.02.137 (2016-11-30)
Driver version: 4.35.0
# uname -r
4.10.17-2-pve
# pveversion -v
proxmox-ve: 5.0-20 (running kernel: 4.10.17-2-pve)
pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc)
pve-kernel-4.10.17-2-pve: 4.10.17-20
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-4
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.11-pve17~bpo90
Bye,
rob