[SOLVED] Accidentally backed up when NFS share was unmounted, "local" full

nightchrono

New Member
Nov 24, 2025
I had a long project this weekend: adding two additional nodes (fresh installs of PVE 9) to my cluster and upgrading my existing node from PVE 8 and PBS 3. The upgrade and clustering went mostly fine. One thing that happened was that the NFS share of my Synology NAS, where I dump my backups, got unmounted. Not realizing the share was unmounted, and thinking I had lost backups in the upgrade, I attempted to just run my backup job. I got errors that the disk was full, and that is when I realized the share had become unmounted. For some reason, instead of going out via the vmbr0 bridge with a static IP that had been allowed in the NFS permissions, the node was attempting to communicate using the newly created cluster network IP, which was not whitelisted. But that's a problem for another day.
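
In hindsight, one quick way to see which source address the node picks when talking to the NAS (the .249 address is my NAS, per the outputs further down) is something like:

Code:
# Ask the kernel which interface and source IP it would use to reach the NAS
ip route get 192.168.50.249
# If "src" shows the cluster network address instead of the vmbr0 one,
# that is the IP the NFS server sees and would need to whitelist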

So my issue is that the backups went to the wrong place. I unmounted the share and did an rm -r on the /mnt/pbs/Synology directory, removing the files that were mistakenly written there. I then rebooted the node and verified that all the mounts automatically came back (they did). All my actual backups were still in place on the NAS.

However, my "local" is still full.

I unmounted the share again and ran "du -hs /mnt/pbs/*". It shows 50G of usage in the "Synology" folder, but when I run "ls" on the folder, there is nothing.

What am I missing for this consumed space, and how do I reclaim it?

Thanks to anyone who can help.
 

Attachments

  • Screenshot 2025-11-24 111232.png (33.6 KB)

What does ls -la /mnt/pbs/Synology show?
And du -h /mnt/pbs/Synology?

Phil
 

Code:
root@pvx:/mnt/pbs/Synology# ls -la /mnt/pbs/Synology
total 1056
drwxrwxr-x 3 backup backup    4096 Nov 24 10:12 .
drwxr-xr-x 3 root   root      4096 Apr 14  2025 ..
drwxr-x--- 1 backup backup 1064960 Apr 14  2025 .chunks
-rw-r--r-- 1 backup backup     292 May 26 00:00 .gc-status
-rw-r--r-- 1 backup backup       0 Apr 14  2025 .lock


There was a lot of output, so for the sake of thread size I am giving the snippet I think is the useful part; there was a lot more in the same vein. I can split it up and post the whole thing if needed.

Code:
984K    /mnt/pbs/Synology/.chunks/fff4
4.0K    /mnt/pbs/Synology/.chunks/fff5
4.0K    /mnt/pbs/Synology/.chunks/fff6
2.1M    /mnt/pbs/Synology/.chunks/fff7
2.1M    /mnt/pbs/Synology/.chunks/fff8
4.0K    /mnt/pbs/Synology/.chunks/fff9
4.0K    /mnt/pbs/Synology/.chunks/fffa
4.0K    /mnt/pbs/Synology/.chunks/fffb
4.0K    /mnt/pbs/Synology/.chunks/fffc
4.0K    /mnt/pbs/Synology/.chunks/fffd
4.0K    /mnt/pbs/Synology/.chunks/fffe
4.0K    /mnt/pbs/Synology/.chunks/ffff
50G     /mnt/pbs/Synology/.chunks
50G     /mnt/pbs/Synology
 
It would also be interesting to see the output of the following commands (as text in CODE tags, not screenshots):
mount
df -h
cat /etc/fstab
cat /etc/pve/storage.cfg
pvesm status
du -xd1 /mnt


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox


Code:
root@pvx:/mnt/pbs/Synology# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=230392600k,nr_inodes=57598150,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=600,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=46086652k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=37,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=71609)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,nosuid,nodev,relatime,pagesize=2M)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/credentials/systemd-journald.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=230433260k,nr_inodes=1048576,inode64)
/dev/sda1 on /mnt/pve/Storage type ext4 (rw,relatime)
Pool2 on /Pool2 type zfs (rw,relatime,xattr,noacl,casesensitive)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/credentials/getty@tty1.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap)
192.168.50.246:/mnt/VM-Pool/VM_Storage on /mnt/pve/VM-Storage type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.50.120,local_lock=none,addr=192.168.50.246)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=46086648k,nr_inodes=11521662,mode=700,inode64)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
192.168.50.249:/volume1/VM-Backup on /mnt/pve/Synology type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.50.249,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.50.249)

Code:
root@pvx:/mnt/pbs/Synology# df -h
Filesystem                              Size  Used Avail Use% Mounted on
udev                                    220G     0  220G   0% /dev
tmpfs                                    44G  5.4M   44G   1% /run
/dev/mapper/pve-root                     68G   65G     0 100% /
tmpfs                                   220G   72M  220G   1% /dev/shm
tmpfs                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                   1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
tmpfs                                   220G   72K  220G   1% /tmp
/dev/sda1                               1.8T   18G  1.7T   2% /mnt/pve/Storage
Pool2                                   6.5T  256K  6.5T   1% /Pool2
/dev/fuse                               128M   52K  128M   1% /etc/pve
tmpfs                                   1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
192.168.50.246:/mnt/VM-Pool/VM_Storage  5.0T  490G  4.5T  10% /mnt/pve/VM-Storage
tmpfs                                    44G  8.0K   44G   1% /run/user/0
192.168.50.249:/volume1/VM-Backup        21T  2.9T   19T  14% /mnt/pve/Synology

Code:
root@pvx:/mnt/pbs/Synology# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
192.168.50.249:/volume1/PBS /mnt/pbs/Synology nfs vers=3,nouser,atime,auto,retrans=2,rw,dev,exec 0 0

Code:
root@pvx:/mnt/pbs/Synology# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: Synology
        export /volume1/VM-Backup
        path /mnt/pve/Synology
        server 192.168.50.249
        content backup
        options vers=3
        prune-backups keep-all=1

pbs: PBS
        datastore Synology
        server 192.168.50.100
        content backup
        fingerprint (Not sure if this is secret, can post if requested)
        port 8007
        prune-backups keep-all=1
        username backup@pbs

zfspool: Pool2
        pool Pool2
        content images,rootdir
        mountpoint /Pool2
        nodes pvx
        sparse 1

dir: Storage
        path /mnt/pve/Storage
        content vztmpl,snippets,rootdir,iso,images,backup
        is_mountpoint 1
        nodes pvx

nfs: VM-Storage
        export /mnt/VM-Pool/VM_Storage
        path /mnt/pve/VM-Storage
        server 192.168.50.246
        content images,rootdir
        prune-backups keep-all=1

Code:
root@pvx:/mnt/pbs/Synology# pvesm status

user config - ignore invalid privilege 'VM.Monitor'

Name              Type     Status     Total (KiB)      Used (KiB) Available (KiB)        %
PBS                pbs     active        71017632        67394884               0   94.90%
Pool2          zfspool     active      7352287232       425454899      6926832332    5.79%
Storage            dir     active      1921724676        18460388      1805572228    0.96%
Synology           nfs     active     22490987904      3010936320     19480051584   13.39%
VM-Storage         nfs     active      5299816448       512751616      4787064832    9.67%
local              dir     active        71017632        67394884               0   94.90%
local-lvm      lvmthin     active       148086784               0       148086784    0.00%

Code:
root@pvx:/mnt/pbs/Synology# du -xd1 /mnt
51440708        /mnt/pbs
4       /mnt/pve
51440716        /mnt

Sorry for the super long reply. Tried to get everything that might be a clue for both of you.
 
Your "local" (/var/lib/vz or root disk) points to the same location as PBS based on the size report from pvesm. This is not what you want but is expected in your particular situation.
The data is in /mnt/pbs/Synology. The files are "hidden" because they start with a "dot". It's a Unix thing. You can see the files if you add "-a" option to your "ls".
At this point if you want to free up space, you can "cd /mnt/pbs/Synology; rm -rf *".
Or if you want to be extra careful : rm -rf .chunk

Then check the space again.
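
Concretely, a rough sketch of the cleanup (paths taken from the outputs above; double-check the share really is unmounted first, so you are deleting from the local disk and not from the NAS):

Code:
# Confirm nothing is mounted at the path (this should print nothing)
mount | grep /mnt/pbs/Synology
# List everything, including the dot-files a plain "ls" hides
ls -la /mnt/pbs/Synology
# Remove the leftover chunk store that was written to the root disk
rm -rf /mnt/pbs/Synology/.chunks
# Verify the root filesystem has free space again
df -h /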


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Your "local" (/var/lib/vz or root disk) points to the same location as PBS based on the size report from pvesm. This is not what you want but is expected in your particular situation.
The data is in /mnt/pbs/Synology. The files are "hidden" because they start with a "dot". It's a Unix thing. You can see the files if you add "-a" option to your "ls".
At this point if you want to free up space, you can "cd /mnt/pbs/Synology; rm -rf *".
Or if you want to be extra careful : rm -rf .chunk

Then check the space again.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Once I saw the .chunks directory taking up the space, I figured that would be the solution. I am not the best at Linux things, so I really appreciate you confirming my suspicion. I am now down to 16.3 GB used.

Thanks so much to both of you. I learned a lot trying to fix this.
 
Someone on this forum mentioned that if you intend to use an NFS share for backups, you should create the directory, make it immutable with chattr +i, and then mount the NFS share onto that directory. If the NFS share is not mounted, no data can be written to the directory, so you can be sure not to fill up your local drive.
 
Thank you! I am going to look into doing that.