VM disk use all ZFS storage available instead of assigned space

xtavras (Renowned Member, joined Jun 29, 2015, Berlin)
Hello,

I have a strange problem. My Proxmox host uses a ZFS RAIDZ2 pool with 68 TB of usable space and has one VM with a 40 TB volume. According to `du` run inside the VM, about 30 TB of that disk are currently in use, but ZFS reports that the disk uses the full 68 TB. How is that possible? The VM in question has VMID 110.


Code:
# zfs list 
NAME                            USED  AVAIL  REFER  MOUNTPOINT 
rpool                          66.8T  1.24G   238K  /rpool 
rpool/ROOT                     2.37G  1.24G   219K  /rpool/ROOT 
rpool/ROOT/pve-1               2.37G  1.24G  2.37G  / 
rpool/data                     66.8T  1.24G   219K  /rpool/data 
rpool/data/base-11111-disk-0   3.54G  1.24G  3.54G  - 
rpool/data/base-11111-disk-1    686K  1.24G   686K  - 
rpool/data/vm-100-disk-0        667M  1.24G   667M  - 
rpool/data/vm-101-cloudinit     174K  1.24G   174K  - 
rpool/data/vm-101-disk-0        202M  1.24G   195M  - 
rpool/data/vm-101-disk-1       5.50G  1.24G  5.14G  - 
rpool/data/vm-102-cloudinit     174K  1.24G   174K  - 
rpool/data/vm-102-disk-0       4.33G  1.24G  4.33G  - 
rpool/data/vm-110-disk-0        128K  1.24G   128K  - 
rpool/data/vm-110-disk-1       66.8T  1.24G  66.8T  -
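As a quick sanity check on the listing above (assuming the usual binary units, TiB for `zfs list` and GiB for the configured volume size), the zvol's USED can be compared with its configured volsize:

```python
# Rough inflation factor of the zvol versus its configured size.
# Numbers taken from the zfs list output and 110.conf in this thread.
used_tib = 66.8        # USED of rpool/data/vm-110-disk-1
volsize_gib = 41000    # size of virtio1 in 110.conf

inflation = used_tib * 1024 / volsize_gib
print(round(inflation, 2))  # ~1.67x the configured volume size
```

So the zvol occupies roughly 1.67 times its nominal 40 TB, which points at per-block overhead on the pool rather than data written inside the guest.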

Code:
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  90.5T  87.7T  2.83T         -     1%    96%  1.00x  ONLINE  -


Code:
cat /etc/pve/qemu-server/110.conf
bootdisk: scsi0
cores: 2
ide2: local:iso/systemrescuecd-x86-5.3.2.iso,media=cdrom,size=572188K
memory: 7000
name: backup3-old.bb.example.com
net0: virtio=00:50:56:00:A5:F7,bridge=vmbr0
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=45d75232-c6cd-470a-8333-916cf3225793
sockets: 1
virtio0: local-zfs:vm-110-disk-0,size=16G
virtio1: local-zfs:vm-110-disk-1,size=41000G
vmgenid: 5cf28b20-3bea-4782-b1b6-3f1061b74200


Code:
pveversion -v
proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
pve-kernel-4.15: 5.2-12
pve-kernel-4.15.18-9-pve: 4.15.18-30
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-33
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-31
pve-container: 2.0-31
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-16
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-43
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
 

Attachments: sht_Auswahl_093.png (31.1 KB)
Hi,

`du` assumes a static block size, which is not true for ZFS; ZFS uses dynamic block sizes.
You also lose about 4% to metadata, and if you have snapshots (as Tapio Lehtonen mentioned), that space is counted as well.
In addition, RAIDZ2 stores two disks' worth of parity, which needs extra space too.
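The parity and padding cost described above can be sketched with the per-block allocation rule OpenZFS applies on raidz vdevs (mirroring `vdev_raidz_asize()`). The vdev width is an assumption here, since the thread does not state it; a 10-disk raidz2 with ashift=12 is used purely for illustration:

```python
import math

def raidz_asize(psize, ashift=12, ndisks=10, nparity=2):
    """Bytes actually allocated for one block of psize bytes on a
    raidz vdev (sketch of OpenZFS vdev_raidz_asize())."""
    sector = 1 << ashift
    data = math.ceil(psize / sector)  # data sectors
    # parity sectors added per row of (ndisks - nparity) data sectors
    parity = nparity * math.ceil(data / (ndisks - nparity))
    total = data + parity
    # raidz rounds each allocation up to a multiple of (nparity + 1)
    # so leftover gaps remain allocatable
    total = math.ceil(total / (nparity + 1)) * (nparity + 1)
    return total * sector

# With the old default 8K volblocksize, 8K of data allocates 24K (3x):
print(raidz_asize(8 * 1024))    # 24576
# A 128K volblocksize shrinks the overhead to about 1.31x:
print(raidz_asize(128 * 1024))  # 172032
```

This is why a small volblocksize on a wide raidz2 can make a zvol consume far more pool space than its nominal size suggests.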
 
