Wrong free space displayed in LXC

yena

Renowned Member
Nov 18, 2011
Hello,
I have a single server with one LXC container.
This instance displays strange output with df:

root@p: ~ $ df
Filesystem 1K-blocks Used Available Use% Mounted on
storage/subvol-100-disk-1 25683991936 25506562688 177429248 100% /
none 492 12 480 3% /dev
tmpfs 32965636 4 32965632 1% /dev/shm
tmpfs 32965636 8288 32957348 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 32965636 0 32965636 0% /sys/fs/cgroup

root@pica-cdn: ~ $ df -h
Filesystem Size Used Avail Use% Mounted on
storage/subvol-100-disk-1 24T 24T 170G 100% /
none 492K 12K 480K 3% /dev
tmpfs 32G 4.0K 32G 1% /dev/shm
tmpfs 32G 8.1M 32G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup

-------------------------------------------------------------------------------------------------------------

On the Host server:
zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 186G 2.10G 184G - 8% 1% 1.00x ONLINE -
storage 27.2T 26.2T 1.01T - 64% 96% 1.00x ONLINE -

-------------------------------------------------------------------------------------------------------------

If I add, for example, 300G to my VPS, it still shows only 170G free...


pveversion -V
proxmox-ve: 5.1-43 (running kernel: 4.15.17-1-pve)
pve-manager: 5.1-52 (running version: 5.1-52/ba597a64)
pve-kernel-4.15: 5.1-4
pve-kernel-4.15.17-1-pve: 4.15.17-8
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-15
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-20
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-26
pve-container: 2.0-22
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-3
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9

-------------------------------------------------------------

Thanks!!
 
hi.

If I add, for example, 300G to my VPS, it still shows only 170G free...

How do you do that exactly? How do you mount the new disk (or do you repartition)? Did you make a filesystem on it?
 
hi.

How do you do that exactly? How do you mount the new disk (or do you repartition)? Did you make a filesystem on it?

No partition or FS; it's a plain LXC container, and the host storage is ZFS.
 
But how do you add it to your container? Through the GUI?
Can we see a config file maybe? (`pct config CTID`)
 
But how do you add it to your container? Through the GUI?
Can we see a config file maybe? (`pct config CTID`)

cat 100.conf
arch: amd64
cores: 24
hostname: p-cdn
memory: 24576
net0: name=eth0,bridge=vmbr0,gw=185.36.72.1,hwaddr=AA:31:56:3D:2A:7B,ip=185......../22,type=veth
onboot: 0
ostype: debian
parent: pre_update
rootfs: storage:subvol-100-disk-1,size=26300G
swap: 0
 
Again, how do you add the 300G of space to your container?

If you just edited the size in the config file, it won't work.

You can use the GUI:

you need to either add a new disk

Container -> Resources -> Add -> Mount Point

or resize the current one

Container -> Resources -> Root Disk -> Resize disk
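
For reference, the same can be done from the host shell with pct (a sketch using the CTID 100 and the 'storage' pool from the config above; the mp0 slot and the mount path are just examples):

# grow the existing root disk by 300G
pct resize 100 rootfs +300G

# or add a new 300G mount point from the same storage, visible as /mnt/extra inside the CT
pct set 100 -mp0 storage:300,mp=/mnt/extra

No partitioning or mkfs is needed afterwards; each subvol is its own ZFS dataset, and df inside the container should pick up the new size.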
 
Again, how do you add the 300G of space to your container?

If you just edited the size in the config file, it won't work.

You can use the GUI:

you need to either add a new disk

Container -> Resources -> Add -> Mount Point

or resize the current one

Container -> Resources -> Root Disk -> Resize disk


I used the GUI.
After deleting the sole snapshot I can now see the 300G added... so I think all the space was being used by snapshots.
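
For anyone running into the same thing, a quick way to verify this on the host (a sketch using the dataset and the 'pre_update' snapshot from the config above):

# list snapshots of the container's root dataset
zfs list -t snapshot -r storage/subvol-100-disk-1

# break down the quota: space referenced by the CT vs. space held only by snapshots
zfs get refquota,usedbydataset,usedbysnapshots storage/subvol-100-disk-1

# remove a snapshot the Proxmox way, so the container config stays in sync
pct delsnapshot 100 pre_update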
 
After deleting the sole snapshot I can now see the 300G added... so I think all the space was being used by snapshots.
I'm glad the problem is gone. You can mark this thread [SOLVED] so others know what to expect.
 
I would like to report a weird behavior of free/used space in LXC too.
my config:
arch: amd64
cores: 2
description: revize 17.12.2024
features: fuse=1,nesting=1
hostname: myene99
memory: 2048
mp1: zfs-mirror:subvol-5099-disk-0,mp=/dsk1,backup=1,size=80G
mp2: /dsk0/mybackup99,mp=/dsk0/mybackup99
mp3: /backup4/mybackup99,mp=/backup/mybackup99
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=BC:24:11:0C:58:7A,ip=192.168.2.99/24,tag=20,type=veth
onboot: 1
ostype: centos
rootfs: zfs-mirror:subvol-5099-disk-1,size=8G
swap: 4096
unprivileged: 1

HDD usage on the Proxmox host:
zfs-mirror/subvol-5099-disk-0 80G 3.6G 77G 5% /zfs-mirror/subvol-5099-disk-0
zfs-mirror/subvol-5099-disk-1 8.0G 3.9G 4.2G 48% /zfs-mirror/subvol-5099-disk-1

which is good.

HDD usage inside the LXC:
zfs-mirror/subvol-5099-disk-1 8.0G 3.9G 4.2G 48% /
zfs-mirror/subvol-5099-disk-0 80G 3.6G 77G 5% /dsk1

which is OK too.

BUT I have a 56GB file on /dsk1:
-rw-r----- 1 1002 1001 57G Dec 18 11:38 store2nd.dat

mc shows the usage of /dsk1 as nearly 75GB, so there should be about 5GB free, not 77G.
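
One thing worth checking (just a guess from the numbers above): df and the ZFS 'used' figures count space actually allocated on disk, while mc sums apparent file sizes, so a sparse or well-compressed file can show a big difference. A small sketch to compare, using the file and dataset names from above:

# inside the CT: allocated size vs. apparent size of the large file
du -h /dsk1/store2nd.dat
du -h --apparent-size /dsk1/store2nd.dat

# on the host: compression and logical vs. physical usage of the dataset behind /dsk1
zfs get compression,compressratio,used,logicalused zfs-mirror/subvol-5099-disk-0

If the two du numbers differ a lot, the file is sparse (or heavily compressed) and the df output is actually correct.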