[SOLVED] ZFS pool almost full despite reducing zvol size

jpaugh

New Member
Sep 15, 2022
I have a PVE host that is currently reporting that "local-zfs" is nearly full. There is only one guest with one virtual disk stored in this pool; the virtual disk was originally 50TB and I have since reduced it to 43TB using:
Bash:
zfs set volsize=43T rpool/data/vm-301-disk-0
Has anyone experienced this before, or can you share some advice on how to fix it?
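In case it helps, these are the commands I can run to show where the space is actually going (dataset vs. snapshots vs. refreservation); the property names are standard OpenZFS ones, not something captured in my screenshots:
Bash:
# Break down used space per dataset (USEDDS, USEDSNAP, USEDREFRESERV, ...)
zfs list -o space -r rpool
# Show the zvol's size versus what it actually consumes
zfs get volsize,refreservation,used,usedbydataset,usedbysnapshots rpool/data/vm-301-disk-0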
 

Attachments

  • guest-hardware-settings.png
  • local-zfs.png
  • rpool.png
  • zfs-list.png
  • disks.png
  • more-pool-data.png
  • zpool-status-rpool.png
Ahhh. It's probably worth checking whether there are any snapshots of the rpool ZFS pool hanging around.

Try running zfs list -t snapshot -r rpool and see if it returns anything.

If there are snapshots, that's probably why the newly freed-up space isn't showing as available.

To destroy a snapshot, run (one at a time) zfs destroy NAME_OF_SNAPSHOT, e.g. zfs destroy rpool/data/something@something_else
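Putting that together, a minimal sketch (the snapshot name below is just a made-up example, use whatever the list actually shows):
Bash:
# List all snapshots under rpool, sorted by how much space each one holds
zfs list -t snapshot -r rpool -o name,used -s used

# Destroy them one at a time, e.g. (hypothetical snapshot name):
zfs destroy rpool/data/vm-301-disk-0@before-resize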
 
Thank you for the reply. There are no snapshots listed by the above command (output attached).
 

Attachments

  • snapshot-list.png
A 4-drive RAIDz1 only has about 50% usable space because of the volblocksize mismatch. As discussed before: RAIDz1 is not like hardware RAID5. There is a lot of padding (and write amplification, which also makes it slow for running VMs), and ZFS does not show the expected usable size when the pool is first created, which catches users by surprise.
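To check the numbers yourself (assuming ashift=12, i.e. 4K sectors, and the common 8K volblocksize default, neither of which I can see from the screenshots): each 8K block written to the zvol needs 2 data sectors plus 1 parity sector, and RAIDz1 pads allocations up to a multiple of parity+1 = 2 sectors, so 16K of raw space is consumed per 8K of data. That is 50% of raw instead of the ~75% you would expect from classic RAID5, i.e. roughly 40TB effective out of 80TB raw, so even the reduced 43TB zvol would not fit. You can verify both values:
Bash:
# ashift=12 means 4K sectors
zpool get ashift rpool
# volblocksize of the zvol (8K was a common default)
zfs get volblocksize rpool/data/vm-301-disk-0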
 
I checked the OpenZFS documentation to confirm what you are saying, and I think I understand it better now; I will likely rely on this chart rather than the PVE GUI in the future: https://openzfs.github.io/openzfs-docs/Basic Concepts/RAIDZ.html.

I wrongly assumed that going from 80TB to 60TB was the total cost of the parity, because losing one disk's worth of capacity made sense from a RAID5 perspective. However, as you pointed out and as I verified against the OpenZFS documentation, this is not the case, and my fears of write amplification are confirmed.

Thank you for your time and input; I appreciate having a second set of eyes look at this problem for me!
 
