Odd storage reporting on a container

etrigan63

New Member
Jun 23, 2025
Hi all. I am running Proxmox VE 9.0 and hosting about 15 containers. All is fine except for one: a custom unprivileged container running Ubuntu 24.04 LTS with the Nextcloud snap. The Nextcloud data is stored on an NFS share mounted on the Proxmox host and then bind-mounted into the container. It all works as it should, except that the container's root volume appears to be counting the space used on the NFS shares mounted under /media as part of its own disk usage: the root volume has swelled to 512G and is sitting at 43% usage. Is this normal? What am I doing wrong? And how can I shrink the container's disk volume without setting off every space-utilization alert in the system?

I have tested the bind mounts: files are landing on the NAS, not being stuffed into some phantom volume inside the container.
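To pin down where the space on / is actually going, this is roughly what I ran inside the container. The -x flag keeps du on the root filesystem, so the NFS bind mounts under /media are excluded from the totals:

```shell
# Per-directory usage on the root filesystem only.
# -x stops du from crossing filesystem boundaries, so bind-mounted
# NFS shares (and tmpfs, snapfuse, etc.) are left out of the sums.
du -x -h --max-depth=1 / 2>/dev/null | sort -h | tail -n 15
```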

Here's the output of lsblk and df -h:

root@nextcloud:~# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme1n1       259:0    0 931.5G  0 disk
|-nvme1n1p1   259:1    0  1007K  0 part
|-nvme1n1p2   259:2    0     1G  0 part
`-nvme1n1p3   259:3    0 930.5G  0 part
nvme0n1       259:4    0   3.6T  0 disk
|-nvme0n1p1   259:5    0   3.6T  0 part
`-nvme0n1p9   259:6    0     8M  0 part
root@nextcloud:~# df -h
Filesystem                            Size  Used Avail Use% Mounted on
vmpool/subvol-111-disk-0              512G  219G  294G  43% /
192.168.1.95:/mnt/colddata/nextcloud   13T  185G   13T   2% /media/nextcloud
192.168.1.95:/mnt/colddata/ncbackups   13T  137M   13T   1% /media/ncbackups
none                                  492K  4.0K  488K   1% /dev
efivarfs                              128K   61K   63K  50% /sys/firmware/efi/efivars
tmpfs                                  31G     0   31G   0% /dev/shm
tmpfs                                  13G  132K   13G   1% /run
tmpfs                                 5.0M     0  5.0M   0% /run/lock
snapfuse                               50M   50M     0 100% /snap/snapd/24792
snapfuse                               56M   56M     0 100% /snap/core18/2934
snapfuse                              334M  334M     0 100% /snap/nextcloud/49338
snapfuse                              336M  336M     0 100% /snap/nextcloud/49897
snapfuse                               51M   51M     0 100% /snap/snapd/25202
snapfuse                               56M   56M     0 100% /snap/core18/2940
root@nextcloud:~#
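In case it matters for the answer, this is what I was planning to run on the Proxmox host (not in the container) to inspect and then shrink the dataset. The dataset name is taken from the df output above; the shrink commands are my best guess from the docs, so please correct me if this is the wrong approach:

```shell
# On the Proxmox host: see what the subvol actually holds and what
# its quota (the 512G "Size" reported inside the container) is set to.
zfs list -o name,used,refer,avail vmpool/subvol-111-disk-0
zfs get refquota vmpool/subvol-111-disk-0

# My guess at the shrink: lower the refquota, then keep the container
# config in sync so the GUI shows the same size (111 is the CT ID).
# zfs set refquota=256G vmpool/subvol-111-disk-0
# pct set 111 -rootfs vmpool:subvol-111-disk-0,size=256G
```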