local storage used instead of local-zfs

zuttu

New Member
Jul 6, 2023
hello all,

I am running Proxmox VE 6.2-4.
The storage consists of 8 HDDs combined in a ZFS RAID10.

There is a VM performance problem: the VM is not available, and restarting or shutting it down from the web interface does not work.
The problem is that there is no space left in local (total size = used size).
When I copy data to the VM 101 disk, which belongs to local-zfs, the total size of the local storage decreases while the used size does not change. I have no idea why this happens. The disk image (vm-101-disk-1) contains over 13TB.

I don't know how to solve it. I hope for your help.

The Storage view: (screenshot)

The status of rpool: (screenshot)

Code:
zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                     23.7T  3.68G      175K  /rpool
rpool/ROOT                23.3G  3.68G      162K  /rpool/ROOT
rpool/ROOT/pve-1          23.3G  3.68G     23.3G  /
rpool/data                23.7T  3.68G      162K  /rpool/data
rpool/data/vm-100-disk-0  73.7G  3.68G     73.7G  -
rpool/data/vm-100-disk-1   575G  3.68G      575G  -
rpool/data/vm-101-disk-0  4.00G  3.68G     4.00G  -
rpool/data/vm-101-disk-1  22.9T  3.68G     22.9T  -
rpool/data/vm-102-disk-0  1.03G  3.68G     1.03G  -
rpool/data/vm-102-disk-1  98.9G  3.68G     98.9G  -
rpool/data/vm-103-disk-0  82.7G  3.68G     82.7G  -
rpool/data/vm-104-disk-0  6.66G  3.68G     6.66G  -

VM 101: (screenshot)
 
First, you should consider upgrading to at least PVE 7.4 or even 8.0. PVE 6.4 has been End-of-Life for quite some time, and you aren't even on that, so your server hasn't received any security fixes for over a year!

Second, "local" and "local-zfs" share the same pool space, so if you delete data from local-zfs, the free space of local should also grow.

Third, a ZFS pool shouldn't be filled more than 80-90%, or it will become very slow and might even fail when completely full. So you should delete some data or add more disks.

Fourth, check that there are no snapshots and that discard/trim is working for all VMs, so that nothing prevents the space of deleted data from being freed. You can check that with zpool list -v && zfs list -o space. After deleting data you might also want to force a trim: fstrim -a && zpool trim rpool
 
Thanks for your reply.

I cleared ~200GB and set discard for all disk images. Snapshots are not used. I ran fstrim -a && zpool trim rpool, but nothing happened.
I created rpool/data/vm-101-disk-1 with a size of 20T. zfs shows USED = 22.8T, while the VM shows 13TB used of 20TB. How is that possible?
At the moment I can only write about 250GB more; the remaining ~6TB of space is not available for use.

Code:
zpool list -v && zfs list -o space
NAME                                         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                       29.1T  27.9T  1.20T        -         -    19%    95%  1.00x    ONLINE  -
  raidz1                                    29.1T  27.9T  1.20T        -         -    19%  95.9%      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1H8SS-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1NTWV-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1GV9T-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1GQYD-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1MNQ7-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1GQF9-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1H8SW-part3      -      -      -        -         -      -      -      -  ONLINE  
    ata-ST4000NM002A-2HZ101_WJG1L0A0-part3      -      -      -        -         -      -      -      -  ONLINE  
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                      252G  23.5T        0B    175K             0B      23.5T
rpool/ROOT                 252G  23.3G        0B    162K             0B      23.3G
rpool/ROOT/pve-1           252G  23.3G        0B   23.3G             0B         0B
rpool/data                 252G  23.5T        0B    162K             0B      23.5T
rpool/data/vm-100-disk-0   252G  73.6G        0B   73.6G             0B         0B
rpool/data/vm-100-disk-1   252G   575G        0B    575G             0B         0B
rpool/data/vm-101-disk-0   252G  2.43G        0B   2.43G             0B         0B
rpool/data/vm-101-disk-1   252G  22.8T        0B   22.8T             0B         0B
 
Search this forum for "padding overhead". You will find dozens of posts where I explain it.

In short: when storing VMs (or rather their zvols) on a raidz1/2/3 ZFS pool, everything will be way bigger because of padding overhead if the zvols were created with a volblocksize that is too low.
Solution: set the "Block size" of your ZFS storage to at least 32K for an 8-disk raidz1 created with ashift=12, instead of the default 8K. Then destroy all VMs and recreate them. The easiest way to do this is a backup/restore.
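The effect can be sketched numerically. The following is a rough, simplified model of raidz allocation (assumptions: every block is written at full volblocksize, compression is ignored, and `zfs list` deflates the raw allocation by the pool's ideal data fraction); with those assumptions, an 8K volblocksize on an 8-disk raidz1 with ashift=12 inflates USED by about 1.75x, which lines up with ~13TB of data showing as ~22.8T:

```python
import math

def raidz_used_ratio(volblocksize, ashift=12, ndisks=8, parity=1):
    """Rough estimate of how much larger `zfs list` USED is than the
    data actually written to a zvol on a raidz vdev."""
    sector = 2 ** ashift
    data_sectors = math.ceil(volblocksize / sector)
    # parity sectors: one set per stripe of (ndisks - parity) data sectors
    parity_sectors = math.ceil(data_sectors / (ndisks - parity)) * parity
    total = data_sectors + parity_sectors
    # raidz pads each allocation up to a multiple of (parity + 1) sectors
    pad_to = parity + 1
    allocated = math.ceil(total / pad_to) * pad_to
    # zfs list deflates raw allocation by the ideal data fraction
    deflate = (ndisks - parity) / ndisks
    return allocated * deflate / data_sectors

print(raidz_used_ratio(8 * 1024))   # 1.75    -> 13T written shows as ~22.8T
print(raidz_used_ratio(32 * 1024))  # 1.09375 -> only ~9% overhead
```

With the default 8K volblocksize, each block needs 2 data sectors plus 1 parity sector, padded up to 4 sectors, so half of every allocation is parity and padding. At 32K the padding nearly disappears, which is why raising the block size before recreating the zvols recovers the missing space.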
 
Thanks a lot.
 
