Difference in my storage after reboot

Kyar

New Member
Sep 9, 2022
Hello,

After rebooting my Proxmox, I noticed a discrepancy in my local storage:

Code:
nvme0n1     259:0    0 931.5G  0 disk
├─nvme0n1p1 259:1    0  1007K  0 part
├─nvme0n1p2 259:2    0     1G  0 part
└─nvme0n1p3 259:3    0 930.5G  0 part


My disk is a 1TB NVMe (930GB).

Currently, my local directory is at 771GB, and my local-zfs is at 732GB.

Before the reboot, I had around 9xx GB.

I'm now left with 200GB. How is this possible?
 
Hi,
From the information at hand it is not clear which sizes exactly you are referring to. You only showed the output of lsblk, which shows the partitions and their sizes on a single NVMe drive, but nothing about the filesystems on those partitions.

Please provide the output of zfs list -o space and zpool list -v, as well as the current storage config (cat /etc/pve/storage.cfg).

Further, if you are referring to storage space not located on the ZFS pool, please also provide the output of df -h.
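For reference, the requested outputs can be collected in one go (these are just the standard commands already mentioned above):

Code:
# Dataset space accounting (used, available, snapshots, children)
zfs list -o space
# Pool-level capacity and per-vdev layout
zpool list -v
# Proxmox VE storage definitions
cat /etc/pve/storage.cfg
# Mounted filesystems, in case the space in question is outside the pool
df -h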
 
Code:
root@PROXMOX:~# zfs list -o space
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
HOME                          2.83T  4.31T        0B    160K             0B      4.31T
HOME/vm-100-disk-0            2.83T  4.31T        0B   4.31T             0B         0B
rpool                          500G   399G        0B    104K             0B       399G
rpool/ROOT                     500G   217G        0B     96K             0B       217G
rpool/ROOT/pve-1               500G   217G        0B    217G             0B         0B
rpool/data                     500G   183G        0B    152K             0B       183G
rpool/data/base-117-disk-0     500G   152K        8K    144K             0B         0B
rpool/data/base-117-disk-1     500G  48.9G        8K   48.9G             0B         0B
rpool/data/subvol-105-disk-0  3.90G  1.10G        0B   1.10G             0B         0B
rpool/data/subvol-106-disk-0  2.90G  2.10G        0B   2.10G             0B         0B
rpool/data/subvol-107-disk-0  4.02G  1.12G      138M   1006M             0B         0B
rpool/data/subvol-108-disk-0  3.48G  1.52G        0B   1.52G             0B         0B
rpool/data/subvol-109-disk-0  13.5G  1.51G        0B   1.51G             0B         0B
rpool/data/subvol-110-disk-0  13.1G  1.94G        0B   1.94G             0B         0B
rpool/data/subvol-111-disk-0  4.40G   612M        0B    612M             0B         0B
rpool/data/subvol-112-disk-0  4.13G   889M        0B    889M             0B         0B
rpool/data/subvol-113-disk-0  47.2G  2.78G        0B   2.78G             0B         0B
rpool/data/subvol-114-disk-0  28.6G  3.86G      471M   3.40G             0B         0B
rpool/data/vm-100-disk-0       500G  13.9G        0B   13.9G             0B         0B
rpool/data/vm-101-disk-0       500G  2.63G        0B   2.63G             0B         0B
rpool/data/vm-101-disk-1       500G    56K        0B     56K             0B         0B
rpool/data/vm-102-disk-0       500G  10.7G        0B   10.7G             0B         0B
rpool/data/vm-103-disk-0       500G    56K        0B     56K             0B         0B
rpool/data/vm-103-disk-1       500G  22.9G        0B   22.9G             0B         0B
rpool/data/vm-104-disk-0       500G  3.51G        0B   3.51G             0B         0B
rpool/data/vm-115-disk-0       500G   104K        0B    104K             0B         0B
rpool/data/vm-115-disk-1       500G  56.1G        0B   56.1G             0B         0B
rpool/data/vm-116-disk-0       500G  1.71G        0B   1.71G             0B         0B
rpool/data/vm-118-disk-0       500G  4.64G      945M   3.72G             0B         0B
rpool/data/vm-118-state-disk   500G   252M        0B    252M             0B         0B

Code:
root@PROXMOX:~# zpool list -v
NAME                                SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
HOME                               10.9T  6.47T  4.43T        -         -     1%    59%  1.00x    ONLINE  -
  raidz1-0                         10.9T  6.47T  4.43T        -         -     1%  59.3%      -    ONLINE
    sdb                            3.64T      -      -        -         -      -      -      -    ONLINE
    sdc                            3.64T      -      -        -         -      -      -      -    ONLINE
    sdd                            3.64T      -      -        -         -      -      -      -    ONLINE
rpool                               928G   399G   529G        -         -    29%    43%  1.00x    ONLINE  -
  nvme-eui.0025385431b39149-part3   931G   399G   529G        -         -    29%  43.0%      -    ONLINE


Code:
root@PROXMOX:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,snippets,rootdir,backup,images,iso
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: HOME
        pool HOME
        content images,rootdir
        mountpoint /HOME
        sparse 1

dir: SSD
        path /mnt/SSD
        content images
        prune-backups keep-all=1
        shared 0

nfs: NAS
        export /volume2/NAS
        path /mnt/pve/NAS
        server NAS
        content rootdir
        prune-backups keep-all=1

nfs: BACKUP
        export /volume2/BACKUP/PROXMOX
        path /mnt/pve/BACKUP
        server NAS
        content vztmpl,rootdir,snippets,iso,images,backup
        prune-backups keep-all=1

pbs: PBS
        datastore BACKUP-PBS
        server 10.0.0.18
        content backup
        fingerprint 
        prune-backups keep-all=1
        username root@pam
Code:
root@PROXMOX:~# df -h
Filesystem                      Size    Used Avail Use% Mounted on
udev                            32G       0   32G   0% /dev
tmpfs                          6,3G    1,5M  6,3G   1% /run
rpool/ROOT/pve-1               717G    217G  500G  31% /
tmpfs                           32G     46M   32G   1% /dev/shm
tmpfs                          5,0M       0  5,0M   0% /run/lock
/dev/sda                       440G    224G  193G  54% /mnt/SSD
rpool                          500G    128K  500G   1% /rpool
rpool/ROOT                     500G    128K  500G   1% /rpool/ROOT
rpool/data                     500G    256K  500G   1% /rpool/data
rpool/data/subvol-113-disk-0    50G    2,8G   48G   6% /rpool/data/subvol-113-disk-0
rpool/data/subvol-110-disk-0    15G    2,0G   14G  13% /rpool/data/subvol-110-disk-0
rpool/data/subvol-107-disk-0   5,0G   1006M  4,1G  20% /rpool/data/subvol-107-disk-0
rpool/data/subvol-105-disk-0   5,0G    1,2G  3,9G  23% /rpool/data/subvol-105-disk-0
rpool/data/subvol-111-disk-0   5,0G    613M  4,5G  12% /rpool/data/subvol-111-disk-0
rpool/data/subvol-109-disk-0    15G    1,6G   14G  11% /rpool/data/subvol-109-disk-0
rpool/data/subvol-112-disk-0   5,0G    889M  4,2G  18% /rpool/data/subvol-112-disk-0
rpool/data/subvol-114-disk-0    32G    3,4G   29G  11% /rpool/data/subvol-114-disk-0
rpool/data/subvol-108-disk-0   5,0G    1,6G  3,5G  31% /rpool/data/subvol-108-disk-0
HOME                           2,9T    256K  2,9T   1% /HOME
/dev/fuse                      128M     44K  128M   1% /etc/pve
NAS:/volume2/BACKUP/PROXMOX    7,3T    3,0T  4,3T  42% /mnt/pve/BACKUP
NAS:/volume2/NAS               7,3T    3,0T  4,3T  42% /mnt/pve/NAS
rpool/data/subvol-106-disk-0   5,0G    2,1G  3,0G  42% /rpool/data/subvol-106-disk-0
tmpfs                          6,3G       0  6,3G   0% /run/user/0

Thanks for your reply, here is all my config.
 
I'm now left with 200GB. How is this possible?
I am still not sure which value and which storage you are referring to. The storage local has about 500G of free space according to the output you showed. Since it resides on the same ZFS pool, rpool, it shares the total available space with the VMs stored on local-zfs, if that is what you are referring to.
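You can see that shared free space directly in the output you posted: rpool/ROOT/pve-1 (which backs local) and rpool/data (which backs local-zfs) both report the same 500G in the AVAIL column, because that free space belongs to the pool, not to either storage. For example:

Code:
# Both the root dataset ("local") and rpool/data ("local-zfs")
# show the pool's shared free space in the AVAIL column
zfs list -o space rpool rpool/ROOT/pve-1 rpool/data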
 
Look:

[Screenshot: summary of the "local" storage in the web UI]

and local-zfs:

[Screenshot: summary of the "local-zfs" storage in the web UI]

They are on the same pool, and one shows 769GB and the other 732GB instead of 930GB.

Do you understand now?
 
They are on the same pool, and one shows 769GB and the other 732GB instead of 930GB.

Do you understand now?
Everything is fine and as expected. Both "local" and "local-zfs" share the same space.
You got 899G of total usable storage. Your root filesystem (and with that "local") is using 217G. Your "local-zfs" is using 183G. 500G is free.
With 183G used by "local-zfs", your "local" shows 217G of 716G used, because 899G - 183G = 716G.
With 217G used by "local", your "local-zfs" shows 183G of 682G used, because 899G - 217G = 682G.

So every time "local" uses up more space, the usable size of "local-zfs" will shrink, and vice versa.
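As a small illustration of the arithmetic above (the values are just the rounded figures from this thread):

Code:
# Illustration only, using the rounded numbers quoted above
POOL_TOTAL=899       # total usable space on rpool, in G
LOCAL_USED=217       # used by the root filesystem ("local")
LOCALZFS_USED=183    # used by rpool/data ("local-zfs")

echo "local:     ${LOCAL_USED}G of $((POOL_TOTAL - LOCALZFS_USED))G used"   # 217G of 716G
echo "local-zfs: ${LOCALZFS_USED}G of $((POOL_TOTAL - LOCAL_USED))G used"   # 183G of 682G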
 
Oh, I understand now.

The total space is constantly changing. This means that when I've used up 500GB, only "400GB" will be displayed as the total left, no?
 
The total space is constantly changing.
Yes. The total space of the whole pool is always the same. It's more about the point of view.
Let's say those two storages, "local" and "local-zfs", are balloons, and the ZFS pool is a box. Now you place those two balloons in that box. Air inside a balloon is used space, and air outside the balloons is free space. From the point of view of the box, it is always the same size. But when you inflate the first balloon, from the point of view of the second balloon it looks like the space around it becomes narrower, so it has less room to expand.
You can distribute the air inside the box between the balloons as you like. Just make sure not to inflate the balloons too much: the combined air volume of both balloons must not exceed the volume of the box, or they would crack the box open.
This means that when I've used up 500GB, only "400GB" will be displayed as the total left, no?
From the point of view of the pool, and of what "zfs list -o space" reports, yes.
But the web UI will show that each of the two storages has 400GB of free space.
So one might say 300 of 700GB are used, so 400GB available, and the other one could say 200 of 600GB are used, so again 400GB available.
That doesn't mean you can actually store another 800GB. It's best to read a bit about "thin provisioning" to better understand the concept.
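If you ever want to put a hard cap on how much of the pool "local-zfs" may consume, one option (just a sketch, the 400G value is only an example, not a recommendation) is a ZFS quota on the backing dataset:

Code:
# Example only: limit rpool/data ("local-zfs") to 400G
# so it can never squeeze out the root filesystem ("local")
zfs set quota=400G rpool/data
# Verify the quota
zfs get quota rpool/data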
 