[SOLVED] Local folder/drive filled up during backup

Spaldo

Member
Jan 25, 2021
Hi, I have read so many threads before posting and have found many similar topics but couldn't find one exactly on point. I probably missed it though...

So anyway, here goes. Yesterday morning the local drive on one of my nodes (PVE5) filled up during backup. I got an error email and could also see that the drive was full.

(screenshot: 2023-10-29-34.png)

I have an SMB/CIFS share to my unraid storage mounted at /mnt/pve/unraid. Although I cannot be sure, I think that for some reason the share was not mounted correctly and the backup instead wrote to the local drive/folder and filled it up.

(screenshot: 2023-10-29-33.png)
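A quick way to check whether the share is actually mounted at any given moment (assuming the mountpoint is /mnt/pve/unraid, as above) is something like:

Code:
root@pve5:/# findmnt /mnt/pve/unraid     # shows the CIFS source and options only if the share is mounted
root@pve5:/# mountpoint /mnt/pve/unraid  # prints "is a mountpoint" or "is not a mountpoint"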

However, now that the share seems to be mounted I cannot find the files to delete them, so this theory may be wrong. Here are the outputs of some commands that I see commonly requested:

Code:
root@pve5:/# du -ha /var/lib/vz | sort -h
4.0K    /var/lib/vz/dump
4.0K    /var/lib/vz/images
4.0K    /var/lib/vz/private
4.0K    /var/lib/vz/snippets
80M     /var/lib/vz/template/iso/opencore-osx-proxmox-vm.iso
124M    /var/lib/vz/template/cache/ubuntu-22.04-standard_22.04-1_amd64.tar.zst
130M    /var/lib/vz/template/cache/ubuntu-23.04-standard_23.04-1_amd64.tar.zst
198M    /var/lib/vz/template/cache/debian-11-turnkey-core_17.1-1_amd64.tar.gz
451M    /var/lib/vz/template/cache
511M    /var/lib/vz/template/iso/virtio-win-0.1.229.iso
801M    /var/lib/vz/template/iso/recovery-ventura.iso
1.9G    /var/lib/vz/template/iso/ubuntu-22.04.2-live-server-amd64.iso
3.2G    /var/lib/vz/template/iso
3.7G    /var/lib/vz
3.7G    /var/lib/vz/template

Code:
root@pve5:/# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso


lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images


zfspool: SpaldoZFS
        pool SpaldoZFS
        content images,rootdir
        nodes pve5,pve3,proxmox
        sparse 0


dir: ssd-directory
        path /mnt/pve/ssd-directory
        content snippets,rootdir,iso,vztmpl,backup,images
        is_mountpoint 1
        nodes pve5
        prune-backups keep-last=5
        shared 0


zfspool: GreenZFS
        pool GreenZFS
        content images,rootdir
        nodes pve5
        sparse 0

dir: unraid
        path /mnt/pve/unraid
        content backup,images,vztmpl
        prune-backups keep-all=1
        shared 0
        is_mountpoint 1
        mkdir 0

It should be noted that I have since added is_mountpoint 1 and mkdir 0 after reading many threads. It hasn't fixed the initial problem, but maybe it will prevent future issues...
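If I read the docs right, the same options can also be set from the CLI with pvesm (storage name unraid, as in my config above), something like:

Code:
root@pve5:/# pvesm set unraid --is_mountpoint yes --mkdir 0
root@pve5:/# pvesm status    # verify - with is_mountpoint set, the storage should fail to activate while the share is not mounted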

Code:
root@pve5:/# df -h
Filesystem                                               Size  Used Avail Use% Mounted on
udev                                                      24G     0   24G   0% /dev
tmpfs                                                    4.7G  1.5M  4.7G   1% /run
/dev/mapper/pve-root                                      68G   54G   11G  84% /
tmpfs                                                     24G   67M   24G   1% /dev/shm
tmpfs                                                    5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p2                                          1022M  344K 1022M   1% /boot/efi
/dev/sdb1                                                233G  1.7G  232G   1% /mnt/pve/Drive
/dev/sda1                                                239G   49G  190G  21% /mnt/pve/ssd-directory
GreenZFS                                                 492G  128K  492G   1% /GreenZFS
SpaldoZFS                                                2.1T   51G  2.1T   3% /SpaldoZFS
SpaldoZFS/subvol-102-disk-0                              2.0G  1.5G  571M  73% /SpaldoZFS/subvol-102-disk-0
SpaldoZFS/subvol-103-disk-0                              2.0G  475M  1.6G  24% /SpaldoZFS/subvol-103-disk-0
SpaldoZFS/subvol-104-disk-0                              4.0G  1.4G  2.7G  33% /SpaldoZFS/subvol-104-disk-0
SpaldoZFS/subvol-301-disk-0                              4.0G  1.4G  2.7G  34% /SpaldoZFS/subvol-301-disk-0
GreenZFS/subvol-102-disk-0                               2.0G  2.0G   78M  97% /GreenZFS/subvol-102-disk-0
GreenZFS/subvol-103-disk-0                               2.0G  848M  1.2G  42% /GreenZFS/subvol-103-disk-0
GreenZFS/subvol-504-disk-0                               500G  331G  170G  67% /GreenZFS/subvol-504-disk-0
GreenZFS/subvol-503-disk-0                               400G  185G  216G  47% /GreenZFS/subvol-503-disk-0
GreenZFS/subvol-104-disk-0                               4.0G  2.1G  2.0G  52% /GreenZFS/subvol-104-disk-0
SpaldoZFS/subvol-504-disk-0                              4.9T  3.2T  1.8T  65% /SpaldoZFS/subvol-504-disk-0
SpaldoZFS/subvol-504-disk-1                              1.6T  173G  1.4T  11% /SpaldoZFS/subvol-504-disk-1
SpaldoZFS/subvol-506-disk-0                              6.0G  1.8G  4.3G  29% /SpaldoZFS/subvol-506-disk-0
SpaldoZFS/subvol-506-disk-1                              200G  128K  200G   1% /SpaldoZFS/subvol-506-disk-1
/dev/fuse                                                128M   68K  128M   1% /etc/pve
//192.168.1.100/Movies-HD/                                33T   28T  5.2T  85% /mnt/lxc_shares/unraid_rwx/Movies-HD
tmpfs                                                    4.7G     0  4.7G   0% /run/user/0

Code:
root@pve5:/# lsblk -f
NAME                         FSTYPE      FSVER    LABEL          UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
└─sda1                       xfs                                 8227b563-971d-432d-803f-25b7d7e2903a    189.7G    20% /mnt/pve/ssd-directory
sdb
└─sdb1                       xfs                                 e063081c-d117-4d6c-ba5d-c14d7a00acd9    231.1G     1% /mnt/pve/Drive
sdc
├─sdc1                       zfs_member  5000     SpaldoZFS      2953576826524904549
└─sdc9
sdd
├─sdd1                       zfs_member  5000     GreenZFS       18318582641099861841
└─sdd9
sde
├─sde1                       zfs_member  5000     GreenZFS       18318582641099861841
└─sde9
sdf
├─sdf1                       zfs_member  5000     SpaldoZFS      2953576826524904549
└─sdf9
zd0
├─zd0p1                      vfat        FAT16    hassos-boot    3A19-0747
├─zd0p2                      squashfs    4.0
├─zd0p3                      squashfs    4.0
├─zd0p4                      squashfs    4.0
├─zd0p5                      squashfs    4.0
├─zd0p6
├─zd0p7                      ext4        1.0      hassos-overlay 3a98dad4-3fb7-4e72-8f08-531816245d5b
└─zd0p8                      ext4        1.0      hassos-data    164d31c5-a690-4460-92b0-7cd33fa651b3
zd16
zd32
├─zd32p1                     vfat        FAT16    hassos-boot    3A19-0747
├─zd32p2                     squashfs    4.0
├─zd32p3                     squashfs    4.0
├─zd32p4                     squashfs    4.0
├─zd32p5                     squashfs    4.0
├─zd32p6
├─zd32p7                     ext4        1.0      hassos-overlay 3a98dad4-3fb7-4e72-8f08-531816245d5b
└─zd32p8                     ext4        1.0      hassos-data    164d31c5-a690-4460-92b0-7cd33fa651b3
zd48
zd64
├─zd64p1
└─zd64p2                     ntfs                 STORE          A868CDBB68CD8890
zd80
zd96
zd112
├─zd112p1
└─zd112p2                    ntfs                 SpaldoZFS PVE5 067E5FC37E5FAA67
zd128
nvme0n1
├─nvme0n1p1
├─nvme0n1p2                  vfat        FAT32                   1B31-1279                              1021.6M     0% /boot/efi
└─nvme0n1p3                  LVM2_member LVM2 001                gO6p1Q-OYda-M0W8-uyl5-j4V8-f87y-p6DrLp
  ├─pve-swap                 swap        1                       f3697bcc-79b8-48c6-be99-0c0062258313                  [SWAP]
  ├─pve-root                 ext4        1.0                     a05562da-71f9-4d21-aa8a-0d4e66ea64c6     10.4G    80% /
  ├─pve-data_tmeta
  │ └─pve-data-tpool
  │   ├─pve-data
  │   ├─pve-vm--500--disk--0
  │   ├─pve-vm--500--disk--1
  │   ├─pve-vm--501--disk--0
  │   ├─pve-vm--504--disk--0 ext4        1.0                     8ebd3c84-cf89-4f77-805a-809e920ffde2
  │   └─pve-vm--505--disk--0 ext4        1.0                     7cfbb612-70cf-43a9-854b-e8e21b937c9d
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--500--disk--0
      ├─pve-vm--500--disk--1
      ├─pve-vm--501--disk--0
      ├─pve-vm--504--disk--0 ext4        1.0                     8ebd3c84-cf89-4f77-805a-809e920ffde2
      └─pve-vm--505--disk--0 ext4        1.0                     7cfbb612-70cf-43a9-854b-e8e21b937c9d

Hopefully this makes sense and someone can help :)
 
Spaldo said:
I have an SMB/CIFS share to my unraid storage mounted at /mnt/pve/unraid. Although I cannot be sure, I think that for some reason the share was not mounted correctly and the backup instead wrote to the local drive/folder and filled it up.
Yes, that's a common problem when using a "Directory" storage without setting the "is_mountpoint" option via the CLI first. With "is_mountpoint" set and a failed mount, the storage would simply stop working and the backups would fail instead of filling up your root filesystem.
Spaldo said:
However, now that the share seems to be mounted I cannot find the files to delete them, so this theory may be wrong. Here are the outputs of some commands that I see commonly requested:
Unmount that SMB share. You can't see the failed backups while something is mounted at that mountpoint.
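Something along these lines, assuming the share is mounted at /mnt/pve/unraid and nothing is currently using it (paths are from the config above, adjust as needed):

Code:
root@pve5:/# umount /mnt/pve/unraid
root@pve5:/# du -sh /mnt/pve/unraid/*            # the failed backups (usually in a "dump" subfolder) should now be visible on the local disk
root@pve5:/# rm /mnt/pve/unraid/dump/vzdump-*    # remove them once you have checked what they are
root@pve5:/# mount /mnt/pve/unraid               # remount the share (if it is defined in /etc/fstab)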
 
