Hello Proxmox community,
I am a novice with Proxmox and ZFS, and I need to understand our situation because our physical server (Debian 9.7) is out of space. This is not my own installation and configuration, and I have no history of its management.
While auditing, I found that one of the VMs (102) has 7TB of disks at the Proxmox level (it is a file server running Windows Server 2016), but at the ZFS level, storage/vm-102-disk-2 uses 9.25TB, and at the Windows level the data only uses 5.5TB. I cannot understand why the ZFS used space is almost twice the "real" Windows used space.
There are no snapshots, no extra copies, and compression is lz4.
So does ZFS metadata use all this space? Is the raidz1 parity poorly optimized? Is a block size misconfigured somewhere? Is something wrong with the ZFS or Proxmox configuration?
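From what I have read, raidz1 overhead depends heavily on volblocksize versus the pool's ashift. Here is my rough attempt at the math, assuming the Proxmox default volblocksize of 8k and ashift=12 on this 7-disk raidz1 (both values are assumptions I still have to verify):
Code:
one 8k volume block at ashift=12   =  2 data sectors of 4k
raidz1 parity                      +  1 parity sector
padding to a multiple of p+1 = 2   +  1 padding sector
raw allocation per 8k block        =  4 sectors = 16k (2x the logical size)
ideal raw/data ratio on 7 disks    =  7/6 ≈ 1.17 (what the USED accounting assumes)
expected USED                      ≈  5.5T x 2 / 1.17 ≈ 9.4T
If that reasoning holds, it would explain almost the entire gap, with lz4 winning back the small difference down to 9.25T.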
Thank you very much for your help
Best regards, Kenny.
cat /etc/pve/qemu-server/102.conf
Code:
scsi0: storage:vm-102-disk-1,discard=on,size=100G
scsi1: storage:vm-102-disk-2,size=7000G
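One detail I notice in this config: scsi0 has discard=on but scsi1 (the 7TB data disk) does not, so I suppose space freed inside Windows is never trimmed back to ZFS. If that is part of the problem, I assume the fix would be something like the command below, followed by a retrim inside the guest, but I have not tried it yet:
Code:
qm set 102 --scsi1 storage:vm-102-disk-2,discard=on,size=7000G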
zfs list -o space -r storage
Code:
NAME                     AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
storage                  3,51G  9,32T  0B        162K    0B             9,32T
storage/base-101-disk-1  3,51G  14,0G  13,5K     14,0G   0B             0B
storage/vm-100-disk-1    3,51G  38,7G  0B        38,7G   0B             0B
storage/vm-102-disk-1    3,51G  22,7G  0B        22,7G   0B             0B
storage/vm-102-disk-2    3,51G  9,25T  0B        9,25T   0B             0B
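If I read this output correctly, the whole 9,25T sits in USEDDS, with nothing in USEDSNAP or USEDREFRESERV, which matches the absence of snapshots.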
zfs list
Code:
NAME                     USED   AVAIL  REFER  MOUNTPOINT
storage                  9,32T  3,50G  162K   /storage
storage/base-101-disk-1  14,0G  3,50G  14,0G  -
storage/vm-100-disk-1    38,7G  3,50G  38,7G  -
storage/vm-102-disk-1    22,7G  3,50G  22,7G  -
storage/vm-102-disk-2    9,25T  3,50G  9,25T  -
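If it helps, I can also post the block size and compression ratio of the big zvol; I believe this is the command to check them:
Code:
zfs get volblocksize,compressratio storage/vm-102-disk-2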
zpool list
Code:
NAME     SIZE   ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
storage  11,4T  11,1T  370G  -         51%   96%  1.00x  ONLINE  -
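The pool is at 96% capacity with 51% fragmentation, which at least explains why everything feels full. For the alignment question above, I think this would show the pool's ashift:
Code:
zpool get ashift storage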
zpool status
Code:
  pool: storage
 state: ONLINE
  scan: scrub repaired 0B in 14h21m with 0 errors on Sun Feb 13 14:45:40 2022
config:

        NAME                            STATE     READ WRITE CKSUM
        storage                         ONLINE       0     0     0
          raidz1-0                      ONLINE       0     0     0
            scsi-350000398581be825      ONLINE       0     0     0
            scsi-35000039858207311      ONLINE       0     0     0
            scsi-350000398582084c9      ONLINE       0     0     0
            scsi-350000398581be1a1      ONLINE       0     0     0
            scsi-35000039858207a05      ONLINE       0     0     0
            scsi-350000398581b7209      ONLINE       0     0     0
            scsi-35000c500a10c210b      ONLINE       0     0     0
        logs
          scsi-33001438042b37703-part1  ONLINE       0     0     0
        cache
          scsi-33001438042b37703-part2  ONLINE       0     0     0

errors: No known data errors