HD Full

jensie

Dear all,

Please provide some help, because I'm unable to understand what's going on.
I have a cluster of 3 servers set up in HA. All 3 have a ZFS pool and are running PVE 8.2.2.
On pve2 and pve3 the root disk usage is approx. 10G,
but on pve1 it is at around 182G for some reason.

Running du -h --max-depth=1 /, I'm getting:

Code:
512     /media
1.3M    /run
3.9M    /etc
du: cannot access '/proc/44143/task/44143/fd/3': No such file or directory
du: cannot access '/proc/44143/task/44143/fdinfo/3': No such file or directory
du: cannot access '/proc/44143/fd/4': No such file or directory
du: cannot access '/proc/44143/fdinfo/4': No such file or directory
0       /proc
5.3G    /HA-Storage
53K     /root
2.7G    /usr
481M    /boot
512     /opt
512     /srv
17G     /mnt
30K     /tmp
2.0K    /rpool
0       /sys
5.0K    /home
66M     /dev
2.5G    /var
28G     /

Running df -h:

Code:
Filesystem                     Size  Used Avail Use% Mounted on
udev                            63G     0   63G   0% /dev
tmpfs                           13G  1.3M   13G   1% /run
rpool/ROOT/pve-1               227G  187G   41G  83% /
tmpfs                           63G   66M   63G   1% /dev/shm
tmpfs                          5.0M     0  5.0M   0% /run/lock
efivarfs                       192K   59K  129K  32% /sys/firmware/efi/efivars
/dev/nvme2n1p1                 1.8T   17G  1.7T   1% /mnt/pve/productionPool
/dev/sdd                       7.3T   47M  6.9T   1% /mnt/media
rpool                           41G  128K   41G   1% /rpool
HA-Storage                     1.7T  128K  1.7T   1% /HA-Storage
rpool/data                      41G  128K   41G   1% /rpool/data
rpool/ROOT                      41G  128K   41G   1% /rpool/ROOT
HA-Storage/subvol-1000-disk-0   32G  4.2G   28G  13% /HA-Storage/subvol-1000-disk-0
HA-Storage/subvol-4500-disk-1  2.0G  566M  1.5G  28% /HA-Storage/subvol-4500-disk-1
HA-Storage/subvol-4500-disk-0  2.0G  561M  1.5G  28% /HA-Storage/subvol-4500-disk-0
/dev/fuse                      128M   44K  128M   1% /etc/pve
tmpfs                           13G     0   13G   0% /run/user/0

rpool/ROOT/pve-1 227G 187G 41G 83% /

It's quite unclear to me why this is so high, as only one VM with a 32G disk is running on this node.
 
Thank you for your reply. Here are the results:

zpool list

Code:
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
HA-Storage  1.81T  52.0G  1.76T        -         -     8%     2%  1.00x    ONLINE  -
rpool        236G   188G  47.5G        -         -    40%    79%  1.00x    ONLINE  -

zfs list

Code:
NAME                            USED  AVAIL  REFER  MOUNTPOINT
HA-Storage                      117G  1.64T   112K  /HA-Storage
HA-Storage/subvol-1000-disk-0  4.14G  27.9G  4.13G  /HA-Storage/subvol-1000-disk-0
HA-Storage/subvol-4500-disk-0   561M  1.45G   561M  /HA-Storage/subvol-4500-disk-0
HA-Storage/subvol-4500-disk-1   566M  1.45G   565M  /HA-Storage/subvol-4500-disk-1
HA-Storage/vm-2000-disk-0       111G  1.71T  44.6G  -
HA-Storage/vm-2000-disk-1      3.05M  1.64T    56K  -
rpool                           188G  40.2G   104K  /rpool
rpool/ROOT                      186G  40.2G    96K  /rpool/ROOT
rpool/ROOT/pve-1                186G  40.2G   186G  /
rpool/data                     2.22G  40.2G    96K  /rpool/data
rpool/data/vm-9000-disk-0       955M  40.2G   955M  -
rpool/data/vm-9000-disk-1      1.28G  40.2G  1.28G  -
 
Here you go:

Code:
proxmox-backup-client failed: Error: unable to open chunk store 'Synology3' at "/mnt/synology/.chunks" - No such file or directory (os error 2)
Name                  Type     Status           Total            Used       Available        %
HA-Storage         zfspool     active      1885863936       120668900      1765195036    6.40%
PBS2                   pbs     active      2097152000        94081664      2003070336    4.49%
PBS3                   pbs   inactive               0               0               0    0.00%
devPool                dir   disabled               0               0               0      N/A
local                  dir     active       237412352       195223552        42188800   82.23%
local-zfs          zfspool     active        44513236         2324400        42188836    5.22%
productionPool         dir     active      1921724676        17631320      1806401296    0.92%
 
Were you connecting to your Synology with NFS?

I suspect that this is the issue. I have seen cases where a connection to an NFS share is lost, and the data is written locally.

If you remove the NFS share and re-add it, you may get a message indicating that you cannot because the directory has contents. If this is the case, you might find the hidden bits in the folder where you mounted the NFS share.
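If you want to peek underneath an active mountpoint without unmounting anything, a bind mount of / works. A rough sketch (the /mnt/rootview path is just an example):

Code:
mkdir -p /mnt/rootview
mount --bind / /mnt/rootview
# anything under /mnt/rootview/mnt/... is stored on the root filesystem,
# even if a share is currently mounted on top of the real /mnt/... path
du -sh /mnt/rootview/mnt/*
umount /mnt/rootview
rmdir /mnt/rootview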
 
Uhh... there is no NFS store defined in pvesm. Is /mnt/synology mounted by other means (e.g., fstab)? Working this way is pretty fraught, as the PVE storage manager is not aware of any issues underneath. This becomes pretty obvious when you look at the du output you provided:

When the Synology share was not mounted, your backup proceeded anyway, as PVE had no idea the storage was unavailable. Your backups are now sitting in /mnt/synology on your root partition.
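Before deleting anything there, it is worth confirming that nothing is mounted at that path right now and seeing how big it is. Something like this (assuming the stray data really is under /mnt/synology):

Code:
mountpoint /mnt/synology    # should report "is not a mountpoint"
du -sh /mnt/synology
ls -la /mnt/synology
# only remove the contents once you are sure they are the stray backups,
# then fix the mount / storage definition before the next backup run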
 
No problem to use NFS via fstab. You just need to set the "is_mountpoint" option via pvesm for the directory storage that points to the NFS share's mountpoint. That way PVE will check whether the NFS share is mounted, and if not, the tasks will fail instead of filling up your root filesystem.
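Untested example of what that could look like (NAS address, export path and storage ID are placeholders):

Code:
# /etc/fstab - mount the NFS share the classic way
<nas-ip>:/volume1/backups  /mnt/synology  nfs  defaults,nofail,_netdev  0  0

# mark the directory storage as an externally managed mountpoint, so PVE
# treats it as offline (and tasks fail) when the share is not mounted
pvesm set <storageid> --is_mountpoint /mnt/synology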
 
Indeed, I used fstab in the following way:

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0


UUID=31f52e69-408f-4a50-83fb-037f5e4eccdb /mnt/media ext4 defaults,noatime,nofail 0 2

I went and looked for /mnt/synology, but I can't find it.
 
Please post the following items:
  • The contents of /etc/pve/storage.cfg
  • The output of find /mnt -maxdepth 3 -type d -ls
Where were you looking for /mnt/synology? Were you looking in /etc/fstab, /mnt, or somewhere else?
 
Thank you for your responses. Here are the requested items:

  • The contents of /etc/pve/storage.cfg

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup


zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1


pbs: PBS2
        datastore Synology3
        server 192.168.2.22
        content backup
        fingerprint XYZ
        prune-backups keep-all=1
        username root@pam


zfspool: HA-Storage
        pool HA-Storage
        content images,rootdir
        mountpoint /HA-Storage
        nodes pve3,pve2,pve1
        sparse 0


dir: productionPool
        path /mnt/pve/productionPool
        content iso,vztmpl,rootdir,images,snippets,backup
        is_mountpoint 1
        nodes pve1


dir: devPool
        path /mnt/pve/devPool
        content vztmpl,iso,rootdir,images,snippets,backup
        is_mountpoint 1
        nodes pve2

  • The output of find /mnt -maxdepth 3 -type d -ls

Code:
    8      1 drwxr-xr-x   5 root     root            5 Dec  9 21:08 /mnt
   383474      1 drwxr-xr-x   3 root     root            3 Apr  6 22:15 /mnt/media
   242956      1 drwxr-xr-x   5 root     root            5 Apr  6 22:15 /mnt/media/frigate
   240839   4137 drwxr-xr-x   2 root     root        24214 Apr 10 15:56 /mnt/media/frigate/clips
   240838      1 drwxr-xr-x   3 root     root            3 Apr 10 09:16 /mnt/media/frigate/recordings
   240840      1 drwxr-xr-x   2 root     root            2 Apr  6 22:15 /mnt/media/frigate/exports
   374562      1 drwxr-xr-x   5 root     root            5 Dec  9 09:38 /mnt/pve
   373978      1 drwxr-xr-x   2 root     root            2 Dec  9 09:36 /mnt/pve/nmvePool
        2      4 drwxr-xr-x   8 root     root         4096 Dec  9 09:38 /mnt/pve/productionPool
 63438849      4 drwxr-xr-x   2 root     root         4096 Dec  9 09:38 /mnt/pve/productionPool/snippets
 95682561      4 drwxr-xr-x   3 root     root         4096 Dec 16 18:54 /mnt/pve/productionPool/images
       11     16 drwx------   2 root     root        16384 Dec  9 09:38 /mnt/pve/productionPool/lost+found
 72613889      4 drwxr-xr-x   2 root     root         4096 Dec  9 09:38 /mnt/pve/productionPool/private
 89128961      4 drwxr-xr-x   4 root     root         4096 Dec  9 09:38 /mnt/pve/productionPool/template
 73138177      4 drwxr-xr-x   2 root     root         4096 Dec  9 09:38 /mnt/pve/productionPool/dump
   374563      1 drwxr-xr-x   2 root     root            2 Dec  9 09:33 /mnt/pve/NMVEPOOL
   228211      1 drwxr-xr-x   2 root     root            3 Nov 30 16:26 /mnt/backups

I was looking in /mnt, and in the end it turned out that /mnt/media was basically the cause of all my problems.

Thank you so much,
Jens
 
Hi Jens

Thanks for posting the last bit of information.

I was looking in /mnt, and in the end it turned out that /mnt/media was basically the cause of all my problems.

So, are you all fixed up now? Or do you at least know what is taking up the space on that node?
 
Were you connecting to your Synology with NFS?

I suspect that this is the issue. I have seen cases where a connection to an NFS share is lost, and the data is written locally.

If you remove the NFS share and re-add it, you may get a message indicating that you cannot because the directory has contents. If this is the case, you might find the hidden bits in the folder where you mounted the NFS share.
Hello there. I had the same problem as the other guy, and I erased the bits while the CIFS share was unmounted.

BUT

when I issue the command, I get this error:

Code:
root@proxmox:~# pvesm set frigate --is_mountpoint yes
update storage failed: unexpected property 'is_mountpoint'

I tried with the storage mounted and unmounted, and with rebooting PVE. No dice.
 
update storage failed: unexpected property 'is_mountpoint'
This seems to work only for "directory"-storages.

You have configured it as an "SMB/CIFS" storage? Then PVE knows about it and handles the mount itself. (What does pvesm status show?)

To use "is_mountpoint" you would mount it the classic way via fstab and tell PVE to use a "directory"-storage...


Disclaimer: not tested
 
Thanks for the reply. It's indeed a CIFS/Samba share.
Here's the status and config:

Code:
root@proxmox:~# pvesm status
Name                Type     Status           Total            Used       Available        %
foto                 nfs     active      2411738112        84059136      2327678976    3.49%
frigate             cifs     active      2671386752       343708032      2327678720   12.87%
local                dir     active        71613560        17080392        50849636   23.85%
local-lvm        lvmthin     active       149860352        76758472        73101879   51.22%
pve-Backups          nfs     active      2351707136        24028160      2327678976    1.02%
sql                  nfs     active      2327680000            1024      2327678976    0.00%
torrefazione        cifs     active      3260734848       933056128      2327678720   28.61%
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

cifs: frigate
        path /mnt/pve/frigate
        server 192.168.2.90
        share frigate
        content iso,images
        preallocation off
        prune-backups keep-all=1
        username sugar0

cifs: torrefazione
        path /mnt/pve/torrefazione
        server 192.168.2.90
        share torrefazione
        content images
        preallocation off
        prune-backups keep-all=1
        username sugar0

nfs: sql
        export /mnt/piscina/sql
        path /mnt/pve/sql
        server 192.168.2.90
        content iso
        prune-backups keep-all=1

nfs: pve-Backups
        export /mnt/piscina/PVE-backups
        path /mnt/pve/pve-Backups
        server 192.168.2.90
        content backup
        prune-backups keep-all=1

nfs: foto
        export /mnt/piscina/foto
        path /mnt/pve/foto
        server 192.168.2.90
        content iso
        prune-backups keep-all=1

Do I have to define the mountpoint in fstab instead of using storage.cfg? Did I understand correctly?

There's no problem for me to change it to an NFS share if that's the issue.
 