ZFS size not adding up on Windows?

killmasta93

Hi,
I was wondering if this has happened to anyone else before. I was checking the size of the ZFS pool, but when I compare it against the disks inside the Windows VM, the ZFS volumes show much more used space and I'm not sure why. I do have the compression feature on.

root@prometheus7:~# zfs get compression vmbaks
NAME PROPERTY VALUE SOURCE
vmbaks compression on local



vmbaks/vmbaks2/vm-100-disk-0 367G 467G 338G -
vmbaks/vmbaks2/vm-104-disk-0 213G 467G 146G -
vmbaks/vmbaks2/vm-104-disk-1 1.23T 467G 1.08T -
virtio0: vmbaks2:vm-104-disk-0,cache=writeback,size=120G
virtio1: vmbaks2:vm-104-disk-1,cache=writeback,size=900G
virtio0: vmbaks2:vm-100-disk-0,cache=writeback,size=250G
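For reference, one way to compare the advertised volume size with the actual allocation and the effect of compression is via zfs list / zfs get (a sketch; the dataset paths simply match the ones above):

Code:
# Hypothetical check: advertised volsize vs. on-disk allocation vs. logical (pre-compression) data
zfs list -t volume -o name,volsize,used,referenced,compressratio -r vmbaks/vmbaks2
zfs get used,logicalused,usedbysnapshots vmbaks/vmbaks2/vm-104-disk-1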

Code:
root@prometheus7:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-2
pve-container: 3.0-16
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
pve-zsync: 2.0-2
qemu-server: 6.1-4
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
Do you have some snapshots of that VM? What RAID level is it (zpool status)?
 
Thanks for the reply, I think you're right, it might be the snapshots. I did delete some, but the math still doesn't add up for me:
vm-104-disk-1 was given 900 GB, and adding the snapshots to those 900 GB does not get me to the 1.08 TB shown.

vmbaks/vmbaks2/vm-104-disk-0@rep_bakfileserver_2020-06-29_14:30:01 511M - 146G -
vmbaks/vmbaks2/vm-104-disk-0@rep_bakfileserver_2020-06-29_15:30:01 104M - 146G -
vmbaks/vmbaks2/vm-104-disk-0@rep_bakfileserver_2020-06-29_15:45:01 110M - 146G -
vmbaks/vmbaks2/vm-104-disk-1@rep_bakfileserver_2020-06-29_14:30:01 113M - 1.08T -
vmbaks/vmbaks2/vm-104-disk-1@rep_bakfileserver_2020-06-29_15:30:01 8.65M - 1.08T -
vmbaks/vmbaks2/vm-104-disk-1@rep_bakfileserver_2020-06-29_15:45:01 83.3M - 1.08T -

vmbaks/vmbaks2/vm-100-disk-0 338G 707G 338G -
vmbaks/vmbaks2/vm-104-disk-0 150G 707G 146G -
vmbaks/vmbaks2/vm-104-disk-1 1.08T 707G 1.08T -
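One way to see where the 1.08T figure comes from is to ask ZFS for the full space breakdown of the zvol and its snapshots (a sketch, using the same dataset names as above):

Code:
# USED broken down into snapshots, the dataset itself, refreservation and children
zfs list -o space vmbaks/vmbaks2/vm-104-disk-1
# per-snapshot deltas for that zvol
zfs list -t snapshot -o name,used,referenced -r vmbaks/vmbaks2/vm-104-disk-1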
 
What RAID level is it?
Can you show the output of zpool status?
 
Thanks for the reply,

Code:
  pool: vmbaks
 state: ONLINE
  scan: scrub repaired 0B in 0 days 04:49:22 with 0 errors on Sun Jun 28 17:55:27 2020
config:

    NAME                        STATE     READ WRITE CKSUM
    vmbaks                      ONLINE       0     0     0
      raidz1-0                  ONLINE       0     0     0
        wwn-0x5000cca02c28f594  ONLINE       0     0     0
        wwn-0x5000cca02c2a6924  ONLINE       0     0     0
        wwn-0x5000cca02c29dad0  ONLINE       0     0     0
        wwn-0x5000cca02c29ccf4  ONLINE       0     0     0
 
Thanks for the reply. As for the command:
root@prometheus7:~# zfs get all vmbaks/vmbaks2/vm-104-disk-1
NAME PROPERTY VALUE SOURCE
vmbaks/vmbaks2/vm-104-disk-1 type volume -
vmbaks/vmbaks2/vm-104-disk-1 creation Thu Mar 12 20:19 2020 -
vmbaks/vmbaks2/vm-104-disk-1 used 1.08T -
vmbaks/vmbaks2/vm-104-disk-1 available 709G -
vmbaks/vmbaks2/vm-104-disk-1 referenced 1.08T -
vmbaks/vmbaks2/vm-104-disk-1 compressratio 1.07x -
vmbaks/vmbaks2/vm-104-disk-1 reservation none default
vmbaks/vmbaks2/vm-104-disk-1 volsize 900G local
vmbaks/vmbaks2/vm-104-disk-1 volblocksize 8K default
vmbaks/vmbaks2/vm-104-disk-1 checksum on default
vmbaks/vmbaks2/vm-104-disk-1 compression on inherited from vmbaks
vmbaks/vmbaks2/vm-104-disk-1 readonly off default
vmbaks/vmbaks2/vm-104-disk-1 createtxg 1773854 -
vmbaks/vmbaks2/vm-104-disk-1 copies 1 default
vmbaks/vmbaks2/vm-104-disk-1 refreservation none default
vmbaks/vmbaks2/vm-104-disk-1 guid 12737608601259361271 -
vmbaks/vmbaks2/vm-104-disk-1 primarycache all default
vmbaks/vmbaks2/vm-104-disk-1 secondarycache all default
vmbaks/vmbaks2/vm-104-disk-1 usedbysnapshots 178M -
vmbaks/vmbaks2/vm-104-disk-1 usedbydataset 1.08T -
vmbaks/vmbaks2/vm-104-disk-1 usedbychildren 0B -
vmbaks/vmbaks2/vm-104-disk-1 usedbyrefreservation 0B -
vmbaks/vmbaks2/vm-104-disk-1 logbias latency default
vmbaks/vmbaks2/vm-104-disk-1 objsetid 134594 -
vmbaks/vmbaks2/vm-104-disk-1 dedup off default
vmbaks/vmbaks2/vm-104-disk-1 mlslabel none default
vmbaks/vmbaks2/vm-104-disk-1 sync standard default
vmbaks/vmbaks2/vm-104-disk-1 refcompressratio 1.07x -
vmbaks/vmbaks2/vm-104-disk-1 written 2.23M -
vmbaks/vmbaks2/vm-104-disk-1 logicalused 815G -
vmbaks/vmbaks2/vm-104-disk-1 logicalreferenced 815G -
vmbaks/vmbaks2/vm-104-disk-1 volmode default default
vmbaks/vmbaks2/vm-104-disk-1 snapshot_limit none default
vmbaks/vmbaks2/vm-104-disk-1 snapshot_count none default
vmbaks/vmbaks2/vm-104-disk-1 snapdev hidden default
vmbaks/vmbaks2/vm-104-disk-1 context none default
vmbaks/vmbaks2/vm-104-disk-1 fscontext none default
vmbaks/vmbaks2/vm-104-disk-1 defcontext none default
vmbaks/vmbaks2/vm-104-disk-1 rootcontext none default
vmbaks/vmbaks2/vm-104-disk-1 redundant_metadata all default
vmbaks/vmbaks2/vm-104-disk-1 encryption off default
vmbaks/vmbaks2/vm-104-disk-1 keylocation none default
vmbaks/vmbaks2/vm-104-disk-1 keyformat none default
vmbaks/vmbaks2/vm-104-disk-1 pbkdf2iters 0 default


As @LnxBil suggested trimming: would that work on a server with SAS disks? Is it recommended?

Thank you
 
So, quick question: is the trim option on the VM, where it says the Discard checkbox? And how could I check whether trim is enabled on ZFS?
 
So, quick question: is the trim option on the VM, where it says the Discard checkbox?

Yes, the Discard box. You also have to use a supported set of settings (SATA, VirtIO, or SCSI with the VirtIO SCSI controller) for the TRIM I/O call to be available, and the guest OS has to support it as well.
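If the Discard checkbox is not available for an existing disk in the GUI, a minimal CLI sketch (assuming VM 104 and its existing virtio1 disk line from above) could be:

Code:
# Hypothetical example: re-add discard=on to the existing disk options of VM 104
qm set 104 --virtio1 vmbaks2:vm-104-disk-1,cache=writeback,size=900G,discard=on

Inside the guest, the TRIM requests then have to be issued by the OS itself, e.g. Windows' "Optimize Drives" task or fstrim on Linux.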

And how could I check whether trim is enabled on ZFS?

You cannot, because that is how ZFS is built: only used space is referenced. If your guest OS says that a block on its virtual disk is to be trimmed, it is freed in ZFS.
There is, however, also TRIM on the ZFS pool side if you have SSDs as your backing storage that themselves support TRIM, but since you're using SAS disks (not SSDs), you do not have or need TRIM on the ZFS pool.
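For reference only (not needed on spinning SAS disks): on SSD-backed pools with ZFS 0.8+, pool-level TRIM can be triggered or automated like this (the pool name is just the one from this thread):

Code:
# Only relevant for SSD-backed pools
zpool trim vmbaks              # run a manual TRIM pass
zpool status -t vmbaks         # show per-vdev TRIM state/progress
zpool set autotrim=on vmbaks   # trim freed blocks continuously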
 
Thanks for the reply. I realized that many of the VMs don't add up with the ZFS usage, so I tried to add the discard option, but it's greyed out; I only see that option available when I add a new disk.

 
Well, I made a thread that is really similar.

I found out that the discard option was not my problem. BUT I also found out that if you check the "Backup" box upon creation, the disk will consume (in my case) about 230% of its size in ZFS.

My only solution so far:
Delete the disk.
Make a new disk, this time without checking "Backup" upon creation (you can change that later without running into trouble again).
For me it just works so far.
BUT! ZFS afterwards will not respect disk sizes, so YOU have to keep track of the virtual hard drives in it, because the space is not reserved up front.
I set one drive to 50 TB while the pool only gave me 34 TB; the disk was created, and ZFS still showed only 5 MB usage (since the disk was empty at that point).
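A minimal sketch for keeping an eye on thin-provisioned zvols (dataset names are just examples): compare the advertised volsize with what is actually reserved and allocated.

Code:
# Hypothetical example: advertised size vs. reservation vs. real allocation
zfs list -t volume -o name,volsize,refreservation,used,available -r vmbaks
# refreservation = none  -> thin provisioned, nothing set aside up front
# used                   -> what the zvol actually occupies right now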
 
Thanks for the reply, I'm not sure what you mean by "Backup"; in Proxmox I only see the option "No backup".
 
I found out that if you check the "Backup" box upon creation, the disk will consume (in my case) about 230% of its size in ZFS.

I very much doubt that. Backup and ZFS are not related. Nevertheless, I checked, and it is not the case.

BUT! ZFS afterwards will not respect disk sizes.

What do you mean by that?

The concepts and the actual day-to-day work with ZFS are really different from a non-CoW-based filesystem, so please read up on its usage. I can recommend the books by Jude & Lucas.
 
Well, Fabian did say the same: Backup and ZFS do not know about each other; it's just a flag for Proxmox.


Well, I meant that disks created on a thin-provisioned pool do not take up their full size in the pool. Therefore, ZFS will not report the disk size you set, but the space actually used at that point.
 
Thanks for the reply, I'm not sure what you mean by "Backup"; in Proxmox I only see the option "No backup".
Yeah, I am sorry. I meant "No backup".

But as LnxBil said, normally that should not interfere. This was only the observation I made yesterday; I will try to reproduce it and then post again.
 
Well, "No backup" kind of screws with my setup, as I need to back up using vzdump; I also back up using pyznap and pve-zsync (can't be too careful). It's just very odd how the math doesn't add up.
 
Okay, I tried to reproduce it and I am not able to.

I guess maybe I created the first drive before selecting "Thin provision" as an option.

Now I am worried that the disk will take up more space than it is showing, and then the system will stop at half full.
 
I guess maybe I created the first drive before selecting "Thin provision" as an option.
That could be the case. Without thin provisioning, the space is reserved beforehand. This can be bad, but it can also be really good if you need a working VM even when your pool is otherwise completely full.
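As a sketch (dataset name and size are just the examples from this thread), the difference is visible in the refreservation property, which can also be changed after the fact:

Code:
# Hypothetical example: check whether the zvol's space is reserved up front
zfs get volsize,refreservation vmbaks/vmbaks2/vm-104-disk-1

# Reserve the full volume size (thick) ...
zfs set refreservation=900G vmbaks/vmbaks2/vm-104-disk-1
# ... or drop the reservation again (thin)
zfs set refreservation=none vmbaks/vmbaks2/vm-104-disk-1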
 
