Can't create VM disk with almost all the available size of the zfspool

gerhardt

Mar 29, 2020
I created a ZFS raidz1 pool with 3 x 4T disks, and it shows that I have 7.04TiB of available space.

[screenshot: storage view showing 7.04TiB available]

But when I tried to add a new 6TiB (6144GiB) disk to a VM, it failed because of insufficient space.

[screenshot: disk creation failing with an out-of-space error]

How could that happen? It's a brand-new pool I just created. When I lowered the disk size to 5000GiB, it was created successfully.

Code:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G   18M  6.3G   1% /run
/dev/mapper/pve-root   57G   18G   37G  32% /
tmpfs                  32G   43M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  32G     0   32G   0% /sys/fs/cgroup
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/fuse              30M   20K   30M   1% /etc/pve
nas                   7.1T  128K  7.1T   1% /nas
tmpfs                 6.3G     0  6.3G   0% /run/user/0

Code:
$ zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
nas   10.9T  1.78M  10.9T        -         -     0%     0%  1.00x    ONLINE  -

Code:
$ zfs list
NAME   USED  AVAIL     REFER  MOUNTPOINT
nas    783K  7.04T      128K  /nas

Code:
$ zpool status -v
  pool: nas
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        nas         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors


Thank you in advance!
 
This is because on raidz, parity data needs to be stored for each block of the zvol that backs the VM disk. See this post [0] or the new section in the documentation [1].
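As a rough back-of-the-envelope sketch (the numbers and factor here are assumptions, based on a 3-disk raidz1 with ashift=12 and the default 8K volblocksize): each 8K block is written as two 4K data sectors plus one 4K parity sector, padded out to four sectors, so in `zfs list` accounting the zvol is charged roughly 4/3 of its volsize.

```shell
# Rough estimate only; the ~4/3 factor assumes 3-disk raidz1, ashift=12,
# and the default 8K volblocksize.
volsize_gib=6144                        # requested disk size
needed_gib=$(( volsize_gib * 4 / 3 ))   # data + parity + padding, ~1.33x
avail_gib=7208                          # ~7.04 TiB reported by `zfs list`
echo "needs ~${needed_gib} GiB, pool has ~${avail_gib} GiB"
```

Under these assumptions the 6144GiB disk would need about 8192GiB, more than the 7.04TiB available, while a 5000GiB disk (~6666GiB with overhead) still fits.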

If you run `zfs get all <pool>/path/to/vm-xxx-disk-y`, you should see a difference between the `volsize` property and the referenced size.

Additionally, please be aware that a ZFS pool should not be filled up too much. ZFS is copy-on-write and thus needs enough free space to operate with good performance.

The rule of thumb is that it should not get fuller than 80%, or performance will degrade.
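For this pool, the 80% rule of thumb works out roughly as follows (a sketch using the ~7.04TiB usable space reported above):

```shell
# Sketch: 80% fill threshold for the ~7.04 TiB usable space shown by `zfs list`.
usable_gib=7208
threshold_gib=$(( usable_gib * 80 / 100 ))   # stay below this
echo "keep pool usage below ~${threshold_gib} GiB"
```

You can watch the current fill level in the CAP column of `zpool list`.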



[0] https://forum.proxmox.com/threads/zfs-counts-double-the-space.71536/#post-320919
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-July/044457.html (new version is not out yet, therefore a link to the patch)
 
