[SOLVED] Pool size reducing constantly, can't explain why.

My local-zfs storage is on a 1 TB SSD, and I can't explain why the total size is constantly decreasing. How could that be? I was assuming the total size always stays the same.
There are only 6 VM disks on there, and the used size looks consistent, as they are not full at all.

I do nightly backups to a separate backup storage (1 TB external HD), so it cannot be the backups.

However, I notice 514 GB used on rpool/ROOT/pve-1, which is huge, and I can't determine what is causing it.
Digging a bit more, I find the local storage growing the same way, but that storage is not a destination for backups.
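For reference, a command like this should show which top-level directories on the root filesystem hold the space (-x keeps du from descending into other mounted datasets, so it only counts rpool/ROOT/pve-1 itself):

du -xh -d1 / 2>/dev/null | sort -h | tail -n 15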


Any help would be welcome :) Thanks!

 

Attachments

  • 1616172975699.png
    1616172975699.png
    21.1 KB · Views: 1
  • 1616172978891.png
    1616172978891.png
    21.1 KB · Views: 1
Last edited:
You seem to have different datasets in your rpool; if one dataset grows, it reduces the available space in the pool for all the other datasets.
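To see exactly where the space goes per dataset, something like this helps; USEDDS is data stored in the dataset itself, USEDCHILD is space consumed by its descendants:

zfs list -r -o space rpool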
 
Thank you, I have stopped the backups since I believe they have an impact; however, I am a bit lost on how to correct this.
The available space is less than 15% of my internal SSD; I think I will exhaust it in less than 3 days.

Any idea on the approach to take?


I would like to have my
- VM disks on the internal SSD (nvme0n1)
- ISO images and CT templates on the SD card (sda1)
- backups on the external drive (sdb1)


root@kphv:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 59.5G 0 disk
└─sda1 8:1 1 59.5G 0 part
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 931.5G 0 part
zd0 230:0 0 48G 0 disk
zd16 230:16 0 48G 0 disk
├─zd16p1 230:17 0 549M 0 part
└─zd16p2 230:18 0 47.5G 0 part
zd32 230:32 0 48G 0 disk
├─zd32p1 230:33 0 549M 0 part
└─zd32p2 230:34 0 47.5G 0 part
zd48 230:48 0 48G 0 disk
├─zd48p1 230:49 0 549M 0 part
└─zd48p2 230:50 0 47.5G 0 part
zd64 230:64 0 48G 0 disk
zd80 230:80 0 48G 0 disk
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 512M 0 part
└─nvme0n1p3 259:3 0 931G 0 part


root@kphv:~# pvesm zfsscan
rpool
rpool/ROOT
rpool/ROOT/pve-1
rpool/data


root@kphv:~# cat /etc/pve/storage.cfg
zfspool: local-zfs
pool rpool/data
content rootdir,images
sparse 1

dir: SDCARD
path /media
content iso,vztmpl
prune-backups keep-all=1
shared 0

dir: BACKUPS
path /backups
content backup,images
prune-backups keep-daily=15,keep-monthly=1,keep-yearly=1
shared 0

dir: local
path /var/lib/vz
content images
shared 0

root@kphv:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 748G 151G 104K /rpool
rpool/ROOT 658G 151G 96K /rpool/ROOT
rpool/ROOT/pve-1 658G 151G 658G /
rpool/data 89.2G 151G 96K /rpool/data
rpool/data/base-101-disk-0 17.7G 151G 17.7G -
rpool/data/base-199-disk-0 19.2G 151G 19.2G -
rpool/data/vm-111-disk-0 5.27G 151G 18.4G -
rpool/data/vm-112-disk-0 4.68G 151G 18.0G -
rpool/data/vm-113-disk-0 22.4G 151G 22.4G -

 
Did you make sure that your sdb is correctly mounted at "/backups"? If "/backups" isn't a valid mount point, it is just a folder on your root filesystem, and the backups would be stored on your SSD.
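An easy way to verify, for example:

mountpoint /backups   # prints whether /backups is a mount point or not
findmnt /backups      # shows the backing device if mounted, no output otherwise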
 
Did you make sure that your sdb is correctly mounted at "/backups"? If "/backups" isn't a valid mount point, it is just a folder on your root filesystem, and the backups would be stored on your SSD.

Thank you! That looks totally right; I don't see any mount point for the /backups or /media directory. I don't want to destroy the storage IDs again, so I will try to link the paths with mount points.

 
I am not sure what "link paths with mount points" means...

IMO:
Run "mount" and post the output here.
You have to edit the file /etc/fstab and set the correct mount points (see the sketch below).
Move the content of /backups to your external disk first, or the space currently used there will still be lost afterwards.
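For example, a minimal sketch assuming the external disk is /dev/sdb1; the UUID and filesystem type below are placeholders, take the real values from the blkid output:

blkid /dev/sdb1        # note the UUID and filesystem type
mkdir -p /mnt/backups
# then add a line like this to /etc/fstab:
# UUID=1234-abcd  /mnt/backups  ext4  defaults  0  2
mount /mnt/backups     # verify the fstab entry works
df -h /mnt/backups     # should now show the 931.5G disk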
 
"lost" was maybe not the perfect description. I meant that the space that you are missing in your main pool will still be unusable if you don't move or delete the current content of /backup.
 
I am not sure what "link paths with mount points" means...

IMO:
Run "mount" and post the output here.
You have to edit the file /etc/fstab and set the correct mount points.
Move the content of /backups to your external disk first, or the space currently used there will still be lost afterwards.
:) That meant nothing indeed: the path was wrong and pointed nowhere.
I decided not to move the backups; my machines are up and fine, with no changes for some time, so I went for re-creation.

Since I am not a Linux expert, I followed this procedure: https://nubcakes.net/index.php/2019/03/05/how-to-add-storage-to-proxmox/#Step 3.

As a result:

root@kphv:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 74M 3.1G 3% /run
rpool/ROOT/pve-1 809G 1.5G 808G 1% /
tmpfs 16G 43M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
rpool 808G 128K 808G 1% /rpool
rpool/ROOT 808G 128K 808G 1% /rpool/ROOT
rpool/data 808G 128K 808G 1% /rpool/data
/dev/fuse 30M 24K 30M 1% /etc/pve
/dev/sdb1 916G 19G 852G 3% /mnt/backups
tmpfs 3.2G 0 3.2G 0% /run/user/0

I then recreated the proper storage with the correct path.
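On the CLI that would be roughly the following, with the storage ID and retention options taken from my config above (I am not certain this matches the GUI steps exactly):

pvesm remove BACKUPS
pvesm add dir BACKUPS --path /mnt/backups --content backup --prune-backups keep-daily=15,keep-monthly=1,keep-yearly=1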


And voilà! Happy :) I reconfigured the backup task and all is fine; backups are now heading to the proper disk.
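As a quick sanity check that the archives really land on the external disk (vzdump writes into the dump subdirectory of a directory storage):

ls -lh /mnt/backups/dump
df -h /mnt/backups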


Thank you all for putting me on the right track!
 
