Hi, I am new to Proxmox and migrating from Hyper-V has been something of a pain. I am getting extremely frustrated and considering moving back to Hyper-V. Please, please, please help me understand.
			I have 3 x 12TB disks, set up in a RAIDZ-1 array. Assuming I lose one of these disks to parity, that should leave me with 23.8TB of formatted, real, usable capacity.
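For reference, my back-of-envelope sums (assuming the GUI reports decimal TB while "zfs list" reports binary TiB as "T"):

3 x 12TB raw              = 36TB
minus 1 disk for parity   = 24TB usable (decimal)
24TB / 2^40 bytes per TiB ≈ 21.8TiB, i.e. roughly what "zfs list" totals below as USED + AVAIL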
root@abe:~# zpool status
  pool: zfsdata
 state: ONLINE
config:
        NAME                                    STATE     READ WRITE CKSUM
        zfsdata                                 ONLINE       0     0     0
          raidz1-0                              ONLINE       0     0     0
            ata-WDC_WD120EMFZ-11A6JA0_XJG004GM  ONLINE       0     0     0
            ata-WDC_WD120EMAZ-11BLFA0_5PGW3M9E  ONLINE       0     0     0
            ata-WDC_WD120EDBZ-11B1HA0_5QG4TBGF  ONLINE       0     0     0

In the Proxmox GUI, it correctly shows 23.83 TB of space in the zfsdata pool.
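If raw, parity-inclusive numbers are useful, I can also post the output of something like:

zpool list -o name,size,allocated,free zfsdata

since, as I understand it, "zpool list" counts space before parity is deducted, unlike "zfs list".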
In this zpool I have some smaller disk images and 3 large ones. One is 10TB, one is 1TB and the other is 2.5TB (shown below as using 13.3T, 1.36T and 3.33T):
root@abe:~# zfs list
NAME                    USED  AVAIL     REFER  MOUNTPOINT
zfsdata                18.3T  3.36T      128K  /zfsdata
zfsdata/vm-100-disk-0  43.7G  3.39T     5.71G  -
zfsdata/vm-101-disk-0  13.3T  8.17T     8.51T  -
zfsdata/vm-101-disk-1  43.7G  3.38T     20.9G  -
zfsdata/vm-101-disk-2  3.33M  3.36T      176K  -
zfsdata/vm-101-disk-3  1.36T  3.82T      925G  -
zfsdata/vm-102-disk-0  3.33M  3.36T      229K  -
zfsdata/vm-102-disk-1  43.7G  3.38T     15.1G  -
zfsdata/vm-102-disk-3  3.33T  4.64T     2.05T  -
zfsdata/vm-103-disk-0  3.33M  3.36T      144K  -
zfsdata/vm-103-disk-1  7.33M  3.36T     90.6K  -
zfsdata/vm-103-disk-2   175G  3.51T     16.2G  -

I have no snapshots:
root@abe:~# zfs list -t snapshot
no datasets available

By my calculations, I have used ~14TB of my 23.8TB capacity (10 + 1 + 2.5 = 13.5TB for the three big images, plus a few hundred GB of smaller ones), so I should have about 9.8TB left.
However, the GUI and the "zfs list" above are showing that I only have about 3.5TB of space left. How can this possibly be? Where has the missing ~6TB gone? I've already given up an entire disk for parity (as expected), so it can't possibly be more parity.
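What strikes me is that the inflation looks per disk image: every big zvol reports USED roughly a third higher than its nominal size (my own arithmetic from the "zfs list" output above):

vm-101-disk-0: 13.3T used for the 10TB image   (~1.33x)
vm-101-disk-3: 1.36T used for the 1TB image    (~1.36x)
vm-102-disk-3: 3.33T used for the 2.5TB image  (~1.33x)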
Any help gratefully received.
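P.S. Happy to post more output if it helps, e.g.:

zfs list -o space zfsdata
zfs get volsize,volblocksize,used,logicalused,refreservation zfsdata/vm-101-disk-0

(property names as I read them in the zfs(8) man page) in case that shows where the extra USED is going.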