Cannot allocate 2TB disk space while over 3TB is free

Nov 4, 2020
I'm trying to add a large disk to a VM. There's plenty of space on the ZFS pool (tank2) according to df and the Proxmox UI, but when I try to add a disk larger than 1TB I get an error:

zfs error: cannot create 'tank2/vm-111-disk-2': out of space at /usr/share/perl5/PVE/API2/Qemu.pm line 1442. (500)

df reports 3.8T available, while the UI shows 4.71 TB (in the add-disk dialog).

Still, adding 2TB fails...

Why is this, and why do df and the UI report different values?

Thanks.


cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content backup,iso,vztmpl
prune-backups keep-last=3
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

zfspool: tank1
pool tank1
content rootdir,images

zfspool: tank2
pool tank2
content rootdir,images

root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 21M 13G 1% /run
/dev/mapper/pve-root 46G 38G 6.0G 87% /
tmpfs 63G 43M 63G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
tank1 2.5T 81G 2.4T 4% /tank1
tank2 4.0T 230G 3.8T 6% /tank2
tank1/subvol-119-disk-0 16G 698M 16G 5% /tank1/subvol-119-disk-0
tank1/subvol-107-disk-0 32G 3.1G 29G 10% /tank1/subvol-107-disk-0
tank1/subvol-127-disk-0 30G 9.3G 21G 31% /tank1/subvol-127-disk-0
tank1/subvol-123-disk-0 16G 2.3G 14G 15% /tank1/subvol-123-disk-0
tank1/subvol-116-disk-0 32G 3.3G 29G 11% /tank1/subvol-116-disk-0
tank1/subvol-128-disk-0 1.2T 996G 205G 83% /tank1/subvol-128-disk-0
tank1/subvol-124-disk-0 32G 4.8G 28G 15% /tank1/subvol-124-disk-0
tank1/subvol-121-disk-0 30G 30G 7.2M 100% /tank1/subvol-121-disk-0
tank2/subvol-117-disk-0 100G 22G 79G 22% /tank2/subvol-117-disk-0
tank2/subvol-100-disk-0 64G 3.3G 61G 6% /tank2/subvol-100-disk-0
tank2/subvol-102-disk-1 8.0G 3.1G 5.0G 39% /tank2/subvol-102-disk-1
tank2/subvol-102-disk-0 8.0G 1.5G 6.6G 19% /tank2/subvol-102-disk-0
tank2/subvol-125-disk-0 32G 1.6G 31G 5% /tank2/subvol-125-disk-0
/dev/fuse 30M 40K 30M 1% /etc/pve
tmpfs 13G 0 13G 0% /run/user/0
 
Don't use df, use zfs list to see how much space is actually left.
And I guess you are using a raidz1/2/3 and didn't increase the volblocksize? In that case you get a lot of padding overhead, and it is quite possible that storing a 2 TB virtual disk consumes 4 TB on your pool, so you run out of space.

Also keep in mind that ZFS should always have about 20% free space, so you don't want to fill up your pool.

So what does your pool look like? The output of zpool status and zfs get volblocksize,ashift YourPool would be useful.
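Something along these lines should show the real numbers (just a sketch; the pool names are taken from your storage.cfg):

Code:
# how much space each pool really has left, as ZFS accounts for it
zfs list -o space tank1 tank2
# raw size, allocation and fragmentation per pool
zpool list tank1 tank2
# volblocksize of the existing VM disks (zvols) on tank2
zfs get -r -t volume volblocksize tank2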
 
Thanks for your reply. My ZFS knowledge is limited, unfortunately.

zpool status

Code:
  pool: tank1
 state: ONLINE
  scan: scrub repaired 0B in 03:02:26 with 0 errors on Sun Nov 14 03:26:28 2021
config:

        NAME                        STATE     READ WRITE CKSUM
        tank1                       ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000c500988aca5f  ONLINE       0     0     0
            scsi-35000c5009885c9cb  ONLINE       0     0     0
            scsi-35000c500988adacb  ONLINE       0     0     0
            scsi-35000c500988ada4b  ONLINE       0     0     0
            scsi-35000c500988b08e7  ONLINE       0     0     0
            scsi-35000c500988ac36b  ONLINE       0     0     0
            scsi-35000c500988b1977  ONLINE       0     0     0
            scsi-35000c5009880029b  ONLINE       0     0     0

errors: No known data errors

  pool: tank2
 state: ONLINE
  scan: scrub repaired 0B in 09:00:15 with 0 errors on Sun Nov 14 09:24:23 2021
config:

        NAME                        STATE     READ WRITE CKSUM
        tank2                       ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000c500841a8f7b  ONLINE       0     0     0
            scsi-35000c500841ad8e3  ONLINE       0     0     0
            scsi-35000c500841ab27b  ONLINE       0     0     0
            scsi-35000c500840bae63  ONLINE       0     0     0
            scsi-35000c50055a3bc13  ONLINE       0     0     0
            scsi-35000c500841ac5b7  ONLINE       0     0     0
            scsi-35000c500841ad7ef  ONLINE       0     0     0
            scsi-35000c500840bdab3  ONLINE       0     0     0
            scsi-35000c500841a866b  ONLINE       0     0     0
            scsi-35000c500841ab8bb  ONLINE       0     0     0

zfs get volblocksize tank1 tank2

Code:
NAME   PROPERTY      VALUE     SOURCE
tank1  volblocksize  -         -
tank2  volblocksize  -         -

The ashift property is not recognized by zfs get, but I can find it with zdb:

zdb -C|grep ashift

Code:
            ashift: 12
            ashift: 12

Thanks for the help!
 
OK, so you have two pools with ashift 12: one raidz2 with 8 disks and one raidz2 with 10 disks. I guess both pools use the default 8K volblocksize.

tank1 will lose 67% of its raw capacity to parity+padding, so only 33% would be usable. And because a pool shouldn't be filled more than 80%, only 26.4% of the raw storage is actually usable. ZFS won't show this massive loss in its reported free space, because the padding overhead only applies to zvols, not to datasets. With an 8K volblocksize, everything you write to a zvol on tank1 will be 225% in size. If you change the volblocksize for that pool to 16K, everything written to a zvol would only be 112.5% in size.

With tank2 it is basically the same: with the 8K volblocksize only 26.4% of the pool's raw capacity is usable. With an 8K volblocksize everything written will be 240% in size, with a 16K volblocksize it would be 120%, and with a 64K volblocksize it would be 105%. (Rough math for tank2: with ashift=12 an 8K block is 2 data sectors plus 2 parity sectors = 4 sectors, padded up to a multiple of parity+1 = 3, so 6 sectors or 24K of raw space; since the reported free space already assumes the ideal 8-of-10 data ratio, those 24K show up as 19.2K consumed, i.e. 240% of the 8K you wrote.)

So you might want to read a bit about volblocksize and how padding works. Here is a good article and this table might be useful.
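For completeness, if you decide to raise the volblocksize: in Proxmox it is set per storage, not on the pool itself, and it only affects newly created zvols, so existing disks would have to be moved or recreated to pick it up. A rough sketch, assuming the storage is the tank2 entry from your storage.cfg:

Code:
# block size used for zvols created on this storage from now on
pvesm set tank2 --blocksize 16k

# equivalent entry in /etc/pve/storage.cfg:
# zfspool: tank2
#         pool tank2
#         content rootdir,images
#         blocksize 16k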
 
