Root disk space shows full due to mounted disk

scurrier
Active Member
Sep 9, 2017
Hello,

I mounted a second disk under /mnt/myseconddisk and added it via the web GUI as storage for backups. The second disk is a big one and nowhere near full. The node's root storage is also not full, but while backing up a container to the second disk, I noticed that the node's summary page showed the root storage filling up, as if I were backing up directly to it:

HD space(root) 94.94% (89.24 GiB of 93.99 GiB)

Did I mount the second disk in the wrong place or something? Is this going to screw up my node? How do I get Proxmox to report the true root disk usage, so I can see how close I am to the danger and undefined behavior of a full disk?
 
Thanks for your help.
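A quick way to answer the "did I mount it in the wrong place" question is to ask which mounted filesystem the path actually lives on. A minimal sketch (run on the real mount path; `/var/lib` is used here only so the demo runs anywhere):

```shell
# findmnt -T reports the mounted filesystem a path belongs to; df -h shows
# that filesystem's size. If TARGET comes back as "/" rather than the mount
# path itself, the directory is just a plain folder on the root disk and
# everything written there lands on root.
# Swap in /mnt/myseconddisk (or your own mount point) on a real node.
path=/var/lib
findmnt -T "$path"    # TARGET column: the mount this path belongs to
df -h "$path"         # should show the second disk's size, not root's
```

If `df -h` on the backup path reports the same size as `/`, the second disk is not actually mounted there.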

Below is the output of the commands requested.

In the original post I had renamed the second disk's mount path to make it easier to follow. The actual path is /mnt/seagate2tb (referred to as /mnt/myseconddisk in the OP).

mount
Code:
root@pve:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=8145712k,nr_inodes=2036428,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1632408k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=39,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=651)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
//thumper/media on /mnt/bindmounts/plexhost/media type cifs (rw,relatime,vers=3.0,sec=ntlmssp,cache=strict,username=plex,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.200.11,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,noperm,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=1632408k,mode=700)

df -h
Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.8G     0  7.8G   0% /dev
tmpfs                 1.6G  162M  1.4G  11% /run
/dev/mapper/pve-root   94G   90G     0 100% /
tmpfs                 7.8G   37M  7.8G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse              30M   16K   30M   1% /etc/pve
//thumper/media       6.1T  5.7T  399G  94% /mnt/bindmounts/plexhost/media
tmpfs                 1.6G     0  1.6G   0% /run/user/0

cat /etc/fstab
Code:
root@pve:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
//thumper/media /mnt/bindmounts/plexhost/media cifs rw,credentials=/home/.smbcredentials-plexhost,vers=3.0,noperm,auto 0 0
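Side note: the fstab above has no line for /mnt/seagate2tb at all, so the disk would never be mounted at boot. A persistent mount would need an entry along these lines (the UUID and filesystem type below are placeholders for illustration, not values from this thread; `blkid` prints the real UUID):

```
# hypothetical entry - replace the UUID and fs type with blkid's output
UUID=0000-0000-0000 /mnt/seagate2tb ext4 defaults 0 2
```

After saving the file, `mount -a` applies it immediately and surfaces any syntax errors.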

du -hsx /*
Code:
root@pve:~# du -hsx /*
13M     /bin
53M     /boot
0       /dev
5.4M    /etc
8.0K    /home
373M    /lib
4.0K    /lib64
16K     /lost+found
4.0K    /media
30G     /mnt
4.0K    /opt
du: cannot access '/proc/28816/task/28816/fd/3': No such file or directory
du: cannot access '/proc/28816/task/28816/fdinfo/3': No such file or directory
du: cannot access '/proc/28816/fd/3': No such file or directory
du: cannot access '/proc/28816/fdinfo/3': No such file or directory
0       /proc
64K     /root
162M    /run
13M     /sbin
4.0K    /srv
0       /sys
32K     /tmp
762M    /usr
59G     /var
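The du output above already contains the answer: `du -hsx` stays on a single filesystem, so the 30G it reports under /mnt lives on the root disk itself, not on a separate device. A minimal portable check for whether a directory is really its own mount (assuming GNU stat, as shipped on Proxmox/Debian; `is_mount` is a helper name made up for this sketch) compares device numbers:

```shell
#!/bin/sh
# A real mountpoint sits on a different device than its parent directory,
# so comparing stat's device numbers tells mounts from plain folders.
is_mount() {
  [ -d "$1" ] && [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

is_mount /proc && echo "/proc is a mountpoint"   # always true on Linux
```

On this node, `is_mount /mnt/seagate2tb` would have come back false, because the path was only a directory on the root filesystem.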
 
Darn it. I just came back here to say the same thing. I went to migrate the disk with the backups I thought were on it to a new machine and found they weren't there. Back on the old machine, the backup was still there, on the root disk! Boy, am I embarrassed. I wish I had seen this message when you posted it; it would have saved me some major aggravation. I must not have saved the fstab file when I edited it, so the disk was never actually mounted.

Thanks for your help! I'll pay you hush money to not tell anyone about this. :)
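One way to keep an unsaved fstab from biting again (a sketch, not a Proxmox feature; `require_mounted` and the path are names invented for this example) is to guard any backup script with mountpoint(1):

```shell
#!/bin/sh
# Hypothetical pre-backup guard: refuse to write unless the target path is
# a real mountpoint, so a missing fstab entry can't fill the root disk.
require_mounted() {
  if mountpoint -q "$1"; then
    echo "OK: $1 is mounted"
  else
    echo "ERROR: $1 is not mounted; aborting backup" >&2
    return 1
  fi
}

# In a real script this would be e.g.:
#   require_mounted /mnt/seagate2tb || exit 1
require_mounted /proc   # /proc is always mounted, so the demo prints OK
```

With this at the top of a backup job, a forgotten mount fails loudly instead of silently filling `/`.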
 
