Data Storage Size Problem

Apr 11, 2023
Hi,

I am not sure what I did wrong. I copied a VM from VMware to the Proxmox storage (about 6 TB), converted it to RAW, and expanded the disk to 10 TB for that VM. I then deleted the copied file (using rm from the shell), but if I go to the Summary for that storage, the freed space does not seem to be reflected. I'm not sure how to correct this, as I need that space for another VM. Here are some screenshots.
 

Attachments

  • Proxmox_Storage1.png (359.7 KB)
  • Proxmox_Storage2.png (349.9 KB)
  • Proxmox_Storage3.png (106.5 KB)
What's the output of zfs list? Ideally, copy it into the post inside [code][/code] blocks, or use the </> icon at the top of the editor to paste it.
 
Code:
NAME                                USED  AVAIL  REFER  MOUNTPOINT
PVE02_LIQUID_RAID10                1.18T  2.18T    96K  /PVE02_LIQUID_RAID10
PVE02_LIQUID_RAID10/vm-301-disk-0  81.3G  2.21T  54.0G  -
PVE02_LIQUID_RAID10/vm-301-disk-1  50.8G  2.20T  33.2G  -
PVE02_LIQUID_RAID10/vm-301-disk-2  60.9G  2.22T  23.6G  -
PVE02_LIQUID_RAID10/vm-302-disk-0   102G  2.22T  61.4G  -
PVE02_LIQUID_RAID10/vm-302-disk-1   914G  2.48T   608G  -
PVE02_RAIDZ                        19.1T  5.24T   175K  /PVE02_RAIDZ
PVE02_RAIDZ/subvol-374-disk-0       492M  7.52G   492M  /PVE02_RAIDZ/subvol-374-disk-0
PVE02_RAIDZ/vm-101-disk-0           192G  5.31T   121G  -
PVE02_RAIDZ/vm-104-disk-0           128G  5.29T  83.0G  -
PVE02_RAIDZ/vm-105-disk-0           128G  5.28T  95.1G  -
PVE02_RAIDZ/vm-106-disk-0           128G  5.28T  91.0G  -
PVE02_RAIDZ/vm-306-disk-0          89.5G  5.28T  49.2G  -
PVE02_RAIDZ/vm-308-disk-0           115G  5.28T  73.9G  -
PVE02_RAIDZ/vm-308-disk-1           153G  5.28T   114G  -
PVE02_RAIDZ/vm-308-disk-2          51.2G  5.28T  15.8G  -
PVE02_RAIDZ/vm-310-disk-0          89.5G  5.29T  39.1G  -
PVE02_RAIDZ/vm-311-disk-0           262G  5.39T   113G  -
PVE02_RAIDZ/vm-312-disk-0          81.8G  5.26T  63.4G  -
PVE02_RAIDZ/vm-313-disk-0           128G  5.33T  42.2G  -
PVE02_RAIDZ/vm-314-disk-0          89.5G  5.29T  40.7G  -
PVE02_RAIDZ/vm-350-disk-0           320G  5.42T   138G  -
PVE02_RAIDZ/vm-351-disk-0           256G  5.44T  50.2G  -
PVE02_RAIDZ/vm-352-disk-0          3.26M  5.24T   216K  -
PVE02_RAIDZ/vm-352-disk-1           320G  5.50T  57.2G  -
PVE02_RAIDZ/vm-352-disk-2          7.05M  5.24T   108K  -
PVE02_RAIDZ/vm-355-disk-0          16.0T  7.75T  13.4T  -
PVE02_RAIDZ/vm-370-disk-0           134G  5.29T  84.3G  -
PVE02_RAIDZ/vm-372-disk-0          95.9G  5.29T  46.5G  -
PVE02_RAIDZ/vm-373-disk-0           102G  5.28T  63.9G  -
PVE02_RAIDZ/vm-376-disk-0          3.26M  5.24T   189K  -
PVE02_RAIDZ/vm-376-disk-1           320G  5.37T   186G  -
PVE02_RAIDZ/vm-376-disk-2          7.05M  5.24T   108K  -
PVE02_RAIDZ/vm-377-disk-0          32.0G  5.27T  4.85G  -
PVE02_RAIDZ/vm-390-disk-0          40.9G  5.28T  1.05G  -
 
Code:
NAME                                USED  AVAIL  REFER  MOUNTPOINT
[…]
PVE02_RAIDZ/vm-355-disk-0          16.0T  7.75T  13.4T  -
There's your 17T.

Since this is a raidz pool, can you also show the output of zpool status PVE02_RAIDZ? The volblocksize of the disk image is also of interest: zfs get volblocksize PVE02_RAIDZ/vm-355-disk-0
 
I don't understand, as my disk size for VM 355 is 9751758M. How can the used size be bigger than the drive size?

Code:
root@pve02:/# zpool status PVE02_RAIDZ
  pool: PVE02_RAIDZ
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 07:38:39 with 0 errors on Sun Feb  9 08:02:41 2025
config:

        NAME                                         STATE     READ WRITE CKSUM
        PVE02_RAIDZ                                  ONLINE       0     0     0
          raidz1-0                                   ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23062G800130    ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23030H800201    ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23062G800275    ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23030H801443    ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23062G800271    ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23030H800029_1  ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23030H800035_1  ONLINE       0     0     0
            nvme-WD_Red_SN700_4000GB_23030H801242_1  ONLINE       0     0     0

errors: No known data errors
root@pve02:/# zfs get volblocksize PVE02_RAIDZ/vm-355-disk-0
NAME                       PROPERTY      VALUE     SOURCE
PVE02_RAIDZ/vm-355-disk-0  volblocksize  8K        -
 
Here is the lsblk output from inside the VM:

Code:
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0   64M  1 loop /snap/core20/2379
loop1                       7:1    0 91.9M  1 loop /snap/lxd/29619
loop2                       7:2    0 91.9M  1 loop /snap/lxd/24061
loop3                       7:3    0 63.7M  1 loop /snap/core20/2434
loop4                       7:4    0 44.3M  1 loop /snap/snapd/23258
loop5                       7:5    0 44.4M  1 loop /snap/snapd/23545
sda                         8:0    0  9.3T  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0  9.3T  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0  9.3T  0 lvm  /
sr0                        11:0    1 1024M  0 rom
 
How is it possible to have different remaining space available for the same filesystem? @aaron I've never seen this before... what does it mean?
You mean the "AVAIL" column? This happens if some disk images, or in ZFS terminology, ZVOL datasets have a reservation / refreservation set. If the Proxmox VE storage config for it doesn't have the "thin provision" checkbox set, the datasets are created "thick" which for ZFS means, they get a reservation.
To convert a thick provisioned ZVOL into a thin one, one needs to remove the reservations:
Code:
zfs set reservation=none refreservation=none {zfspool}/vm-X-disk-Y
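
To check whether a given image actually carries such a reservation, and to have future images on that storage created thin, something along these lines should work (the dataset and storage names are the ones from this thread; "sparse" is the zfspool storage option behind the "thin provision" checkbox):
Code:
# non-"none" values mean the zvol is thick-provisioned
zfs get reservation,refreservation,usedbyrefreservation PVE02_RAIDZ/vm-355-disk-0
# have future disk images on this storage created thin
pvesm set PVE02_RAIDZ --sparse 1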

@JSChasle It seems that the situation explained in the admin guide is hitting you here (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_raid_considerations).

The disk image has a volblocksize of 8k, and in a raidz1 one parity block is written for each 8k of data. I assume the pool has an ashift of 12, which means a physical block size of 4k (2^12 = 4096). Therefore, you get one 4k parity block (the smallest possible physical block) per 8k of disk image data.
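
The ashift assumption can be checked directly on the pool (pool name taken from this thread). With ashift=12, every 8k of zvol data becomes two 4k data blocks plus one 4k parity block, i.e. 12k on disk, which matches the roughly 1.5x inflation:
Code:
# a value of 12 means 2^12 = 4096 bytes physical block size
zpool get ashift PVE02_RAIDZ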

Given that the disk image is 10T, the 16T currently used is roughly 1.5x the 10T, allowing some leeway for additional metadata overhead and rounding errors.

This situation has improved: with recent ZFS versions the default volblocksize is 16k, so the impact of the additional parity blocks is much lower. If you are still on an older Proxmox VE version, you can override the volblocksize in the storage config. This only affects newly created disk images, though; it cannot be changed for an existing one. Therefore, after changing the volblocksize, moving the disk image from this storage to another one and then back should recreate it with the larger volblocksize (rough sketch below).
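
As a rough illustration of that move, assuming the disk is attached as scsi0 (the actual disk key may differ) and that a second storage with enough free space is available; PVE02_LIQUID_RAID10 is used here only as an example target, and on older versions the command is spelled qm move_disk:
Code:
# have new zvols on this storage created with a 16k volblocksize
pvesm set PVE02_RAIDZ --blocksize 16k
# move the disk away and back so it is recreated with the new volblocksize
qm move-disk 355 scsi0 PVE02_LIQUID_RAID10 --delete 1
qm move-disk 355 scsi0 PVE02_RAIDZ --delete 1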
 