restored VM on thin-provisioned DRBD9 storage is no longer "thin"

mmenaz

Renowned Member
Jun 25, 2009
Hi, I'm running Proxmox 4.1 (enterprise repo) with a DRBD9 cluster and thin provisioning enabled. I've noticed that when I create a VM on the DRBD9 storage it is correctly thin provisioned, but when I restore a VM from backup, almost 100% of the space gets allocated.
For example, I backed up VM 108 and restored it as VM 999, and lvs shows this:
Code:
root@prox01:~# lvs
  LV               VG       Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----   4.00m
  .drbdctrl_1      drbdpool -wi-ao----   4.00m
  drbdthinpool     drbdpool twi-aotz--   1.42t                      27.72  14.41
[cut]
  vm-108-disk-1_00 drbdpool Vwi-aotz--   8.00g drbdthinpool          8.32
  vm-999-disk-1_00 drbdpool Vwi-aotz--   8.00g drbdthinpool         99.98
  data             pve      -wi-ao---- 100.44g
  root             pve      -wi-ao----  46.50g
  swap             pve      -wi-ao----  23.25g
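
To compare just the two volumes, the allocation can also be queried directly; a minimal sketch (lv_name, pool_lv and data_percent are standard lvs report fields):
Code:
# show only name, backing thin pool and allocated data percentage
lvs -o lv_name,pool_lv,data_percent drbdpool/vm-108-disk-1_00 drbdpool/vm-999-disk-1_00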

Is this a bug, or am I missing something?
Thanks in advance
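
For completeness, the only workaround that comes to mind (just a sketch; it assumes the restored disk is attached as scsi0, that "drbd1" is the storage ID, and that the guest filesystem supports discard) would be to enable discard on the disk and then trim from inside the guest:
Code:
# host side: re-attach the disk with discard enabled
# ("drbd1" and "scsi0" are placeholders for the real storage ID and bus/slot)
qm set 999 --scsi0 drbd1:vm-999-disk-1,discard=on

# guest side, after a reboot: hand unused blocks back to the thin pool
fstrim -av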

Code:
root@prox01:~# pveversion -v
proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-37
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-32
qemu-server: 4.0-55
pve-firmware: 1.1-7
libpve-common-perl: 4.0-48
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-5
pve-container: 1.0-44
pve-firewall: 2.0-17
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie
drbdmanage: 0.91-1