Hi all. I am running Proxmox 9.0 CE and hosting about 15 containers. All is fine except for one: a custom unprivileged container running Ubuntu 24.04 LTS with the Nextcloud snap. The Nextcloud data is stored on an NFS share mounted on the Proxmox host and then bind-mounted into the container. It all works as it should, except that the container's root volume appears to be counting the space used on the NFS shares mounted under /media as part of the container disk, which has swelled to 512GB and is sitting at 43% usage. Is this normal? What am I doing wrong? And how can I shrink the container disk volume without setting off every space-utilization alert in the system?
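On the shrinking part, this is the route I've sketched out so far. Everything here is an assumption on my end: CTID 111 comes from the subvol name, 256G is just a placeholder, and I know shrinking is only safe while actual usage is well below the new cap.

```shell
# Sketch only, meant for the Proxmox host (not inside the container).
# Assumes ZFS-backed container storage, CTID 111, placeholder size 256G.
if command -v zfs >/dev/null; then
    # Tighten the cap on the container's root subvol. On ZFS storage,
    # PVE enforces LXC disk sizes via refquota, so lowering it shrinks
    # the volume without touching the data.
    zfs set refquota=256G vmpool/subvol-111-disk-0
else
    echo "zfs not found here; run this on the PVE host"
fi
# Afterwards, update the size= value on the rootfs line in
# /etc/pve/lxc/111.conf so the GUI (and its alerts) match the new quota.
```

I'd obviously confirm actual usage is well under the new size first, and take a backup before touching the quota. Happy to be corrected if there's a cleaner way.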
I have tested the volume mounts, and files are appearing on the NAS rather than being stuffed into some phantom volume in the container.
Here's the output of
lsblk and df -h:

root@nextcloud:~# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme1n1     259:0    0 931.5G  0 disk
|-nvme1n1p1 259:1    0  1007K  0 part
|-nvme1n1p2 259:2    0     1G  0 part
`-nvme1n1p3 259:3    0 930.5G  0 part
nvme0n1     259:4    0   3.6T  0 disk
|-nvme0n1p1 259:5    0   3.6T  0 part
`-nvme0n1p9 259:6    0     8M  0 part
root@nextcloud:~# df -h
Filesystem                            Size  Used Avail Use% Mounted on
vmpool/subvol-111-disk-0              512G  219G  294G  43% /
192.168.1.95:/mnt/colddata/nextcloud   13T  185G   13T   2% /media/nextcloud
192.168.1.95:/mnt/colddata/ncbackups   13T  137M   13T   1% /media/ncbackups
none                                  492K  4.0K  488K   1% /dev
efivarfs                              128K   61K   63K  50% /sys/firmware/efi/efivars
tmpfs                                  31G     0   31G   0% /dev/shm
tmpfs                                  13G  132K   13G   1% /run
tmpfs                                 5.0M     0  5.0M   0% /run/lock
snapfuse                               50M   50M     0 100% /snap/snapd/24792
snapfuse                               56M   56M     0 100% /snap/core18/2934
snapfuse                              334M  334M     0 100% /snap/nextcloud/49338
snapfuse                              336M  336M     0 100% /snap/nextcloud/49897
snapfuse                               51M   51M     0 100% /snap/snapd/25202
snapfuse                               56M   56M     0 100% /snap/core18/2940
root@nextcloud:~#