Very odd issue with storage space?

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I was wondering if someone else has had this issue before. I recently got an alert that storage was 83% used, so I checked, and the local-zfs storage looked fine at first. Looking closer, though, I saw that the storage is only about 500 GB in total, which I don't understand, since I have 4 disks of 1 TB each. I did create another dataset called vmbackup that another Proxmox host sends backups to using pve-zsync, but I'm not sure how that got mixed up with the storage totals.
Thank you

https://imgur.com/a/GnAP8uN

Code:
root@prometheus4:~# zfs list -o space
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                          266G  1.50T        0B    104K             0B      1.50T
rpool/ROOT                     266G   273G        0B     96K             0B       273G
rpool/ROOT/pve-1               266G   273G        0B    273G             0B         0B
rpool/data                     266G   286G        0B     96K             0B       286G
rpool/data/vm-105-disk-1       266G  10.1G     1.66G   8.42G             0B         0B
rpool/data/vm-107-disk-1       266G  43.2G     3.67G   39.5G             0B         0B
rpool/data/vm-107-disk-2       266G   233G      304M    232G             0B         0B
rpool/swap                     271G  8.50G        0B   3.04G          5.46G         0B
rpool/vmbackup                34.8G   965G        0B     96K             0B       965G
rpool/vmbackup/vm-101-disk-1  34.8G   103G     6.64G   96.1G             0B         0B
rpool/vmbackup/vm-101-disk-2  34.8G   862G     19.0G    842G             0B       997M
root@prometheus4:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
|-sda1     8:1    0  1007K  0 part
|-sda2     8:2    0 931.5G  0 part
`-sda9     8:9    0     8M  0 part
sdb        8:16   0 931.5G  0 disk
|-sdb1     8:17   0  1007K  0 part
|-sdb2     8:18   0 931.5G  0 part
`-sdb9     8:25   0     8M  0 part
sdc        8:32   0 931.5G  0 disk
|-sdc1     8:33   0 931.5G  0 part
`-sdc9     8:41   0     8M  0 part
sdd        8:48   0 931.5G  0 disk
|-sdd1     8:49   0 931.5G  0 part
`-sdd9     8:57   0     8M  0 part
zd0      230:0    0     8G  0 disk [SWAP]
zd16     230:16   0   128G  0 disk
|-zd16p1 230:17   0   350M  0 part
`-zd16p2 230:18   0 127.7G  0 part
zd32     230:32   0   128G  0 disk
|-zd32p1 230:33   0     1M  0 part
|-zd32p2 230:34   0   256M  0 part
`-zd32p3 230:35   0 127.8G  0 part
zd48     230:48   0   250G  0 disk
`-zd48p1 230:49   0   250G  0 part
zd64     230:64   0  1000G  0 disk
`-zd64p1 230:65   0  1000G  0 part
zd80     230:80   0   250G  0 disk
|-zd80p1 230:81   0   350M  0 part
`-zd80p2 230:82   0 249.7G  0 part
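For reference, the pve-zsync job on the sending host that fills a dataset like rpool/vmbackup is typically set up along these lines; the destination IP, job name, and snapshot count below are illustrative placeholders, not values from this post:

Code:
# on the sending Proxmox host: create a recurring job that replicates
# VM 101 into rpool/vmbackup on this host (destination IP is a placeholder)
pve-zsync create --source 101 --dest 192.168.1.10:rpool/vmbackup --name vmbackupjob --maxsnap 7
# run a one-off sync of the same job manually
pve-zsync sync --source 101 --dest 192.168.1.10:rpool/vmbackup --name vmbackupjob --maxsnap 7 --verbose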
 
You currently have one zpool: rpool.
On it you created a dataset, rpool/vmbackup, which uses 965G. That usage is counted against the used space of rpool, because the dataset lives on the same disks.
Two details explain the numbers you are seeing. First, your four 931.5G disks appear to be in a striped mirror (RAID10): rpool shows 1.50T used plus 266G available, roughly 1.8T usable, not 4T. Second, the ~500G total that local-zfs reports is simply USED + AVAIL of rpool/data (286G + 266G ≈ 552G); the backups under rpool/vmbackup have consumed most of the remaining space.
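To verify this at the pool level, and optionally to keep the backups from filling the pool, something like the following should work; the 1T quota is only an example value, chosen because a quota must stay above the 965G already in use:

Code:
# pool-level view: raw capacity and allocation of the whole pool
zpool list rpool
# per-dataset breakdown (the same view you already posted)
zfs list -o space rpool
# optional: cap the backup dataset so replication cannot fill the pool
# (1T is an example; ZFS refuses a quota below the dataset's current usage)
zfs set quota=1T rpool/vmbackup
zfs get quota rpool/vmbackup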

I hope this helps!
 
