Understanding an LXC container's storage size

grobs

Hi,

I have an LXC container with a large storage size defined in the web UI ("900G"), but the filesystem does not seem to have taken the last resize commands into account.

Here is the current state of the storage:

On the container:
Code:
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
rpool/data/subvol-183-disk-0   806G    643G  163G  80% /
none                           492K    4,0K  488K   1% /dev
tmpfs                           63G    200K   63G   1% /dev/shm
tmpfs                           63G    8,2M   63G   1% /run
tmpfs                          5,0M       0  5,0M   0% /run/lock
tmpfs                           63G       0   63G   0% /sys/fs/cgroup

On the Proxmox host:
Code:
# zfs list rpool/data/subvol-183-disk-0
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool/data/subvol-183-disk-0   643G   163G      643G  /rpool/data/subvol-183-disk-0

# cat /etc/pve/local/lxc/183.conf
#production
arch: amd64
cpulimit: 0
cpuunits: 1000
hostname: xxx
memory: 24576
net0: name=eth0,bridge=vmbr2,gw=x.x.x.x,hwaddr=6A:75:2E:5D:15:79,ip=x.x.x.x/26,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-183-disk-0,size=900G
swap: 4096

# pveversion
pve-manager/6.3-6/2184247e (running kernel: 5.4.44-1-pve)
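
For completeness, the limit actually applied on the dataset can be read back with zfs get (these are standard ZFS properties; PVE normally enforces container disk sizes on ZFS subvols with refquota):

Code:
# Check which quota is enforced on the subvol and what ZFS reports as available
zfs get quota,refquota,used,available rpool/data/subvol-183-disk-0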

Could you please explain this size difference (defined vs. measured)?

Regards
 
hi,

> but the filesystem does not seem to have taken the last resize commands into account.

which command did you run to resize the disk?
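
for reference, the web UI action maps to pct resize on the CLI (standard PVE tooling), so you could also retry it from a shell:

Code:
# CLI equivalent of Resources > Resize disk: grow the rootfs of CT 183 to 900G
pct resize 183 rootfs 900G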
 
I only did it via the web UI (Resources > Resize disk). I didn't get any error message.

I tried rebooting the container, but nothing changed.

Other potentially interesting info: the container had a snapshot. Removing it made the container 100G larger (I don't understand why).

Here is the config file before the snapshot removal (note that the snapshot section still records the pre-resize size=800G, so the disk was resized to 900G while the snapshot existed):

Code:
# cat /etc/pve/local/lxc/183.conf
#production
arch: amd64
cpulimit: 0
cpuunits: 1000
hostname: xxx
memory: 24576
net0: name=eth0,bridge=vmbr2,gw=x.x.x.x,hwaddr=6A:75:2E:5D:15:79,ip=x.x.x.x/26,type=veth
onboot: 1
ostype: debian
parent: avant_reboot
rootfs: local-zfs:subvol-183-disk-0,size=900G
swap: 4096


[a_snapshot]
#
arch: amd64
cpulimit: 0
cpuunits: 1000
hostname: xxx
memory: 24576
net0: name=eth0,bridge=vmbr2,gw=x.x.x.x,hwaddr=6A:75:2E:5D:15:79,ip=x.x.x.x/26,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-183-disk-0,size=800G
snaptime: 1608128109
swap: 4096
 
> Removing it made the container 100G larger (I don't understand why).

do you see the correct size inside the container (df -h) after the snapshot removal?

what is the output of zfs list | grep CTID (replace CTID with yours)?
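
also, the 100G freed by removing the snapshot is expected: a snapshot pins the old blocks it still references, so a dataset's USED can exceed its REFER by the space held for snapshots, and destroying the snapshot releases it. you can see that space with standard ZFS properties, e.g.:

Code:
# space held exclusively by snapshots of this dataset
zfs get usedbysnapshots rpool/data/subvol-183-disk-0

# per-snapshot space usage
zfs list -t snapshot -r rpool/data/subvol-183-disk-0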
 
> do you see the correct size inside the container (df -h) after the snapshot removal?

No, I see 806G instead of 900G.

Before the snapshot removal (I had run these commands and noted the results):
Code:
ON PROXMOX HOST:
# zfs list
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                          782G  62.9G      104K  /rpool
rpool/ROOT                    7.15G  62.9G       96K  /rpool/ROOT
rpool/ROOT/pve-1              7.15G  62.9G     7.15G  /
rpool/data                     775G  62.9G      144K  /rpool/data
rpool/data/subvol-108-disk-0  2.47G  3.53G     2.47G  /rpool/data/subvol-108-disk-0
rpool/data/subvol-123-disk-0  2.33G  2.67G     2.33G  /rpool/data/subvol-123-disk-0
rpool/data/subvol-125-disk-0  19.0G  19.0G     19.0G  /rpool/data/subvol-125-disk-0
rpool/data/subvol-167-disk-0  1.78G  3.22G     1.78G  /rpool/data/subvol-167-disk-0
rpool/data/subvol-183-disk-0   743G  62.9G      643G  /rpool/data/subvol-183-disk-0
rpool/data/subvol-184-disk-0  2.92G  4.08G     2.92G  /rpool/data/subvol-184-disk-0
rpool/data/subvol-197-disk-0  1.82G  9.18G     1.82G  /rpool/data/subvol-197-disk-0
rpool/data/subvol-199-disk-0  1.62G  3.38G     1.62G  /rpool/data/subvol-199-disk-0

ON CONTAINER:
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
rpool/data/subvol-183-disk-0   706G    643G   63G  92% /
none                           492K    4,0K  488K   1% /dev
tmpfs                           63G    160K   63G   1% /dev/shm
tmpfs                           63G    8,2M   63G   1% /run
tmpfs                          5,0M       0  5,0M   0% /run/lock
tmpfs                           63G       0   63G   0% /sys/fs/cgroup

After the snapshot removal:
Code:
ON PROXMOX HOST:
# zfs list
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                          682G   163G      104K  /rpool
rpool/ROOT                    7.15G   163G       96K  /rpool/ROOT
rpool/ROOT/pve-1              7.15G   163G     7.15G  /
rpool/data                     675G   163G      144K  /rpool/data
rpool/data/subvol-108-disk-0  2.47G  3.53G     2.47G  /rpool/data/subvol-108-disk-0
rpool/data/subvol-123-disk-0  2.30G  2.70G     2.30G  /rpool/data/subvol-123-disk-0
rpool/data/subvol-125-disk-0  19.0G  19.0G     19.0G  /rpool/data/subvol-125-disk-0
rpool/data/subvol-167-disk-0  1.78G  3.22G     1.78G  /rpool/data/subvol-167-disk-0
rpool/data/subvol-183-disk-0   643G   163G      643G  /rpool/data/subvol-183-disk-0
rpool/data/subvol-184-disk-0  2.92G  4.08G     2.92G  /rpool/data/subvol-184-disk-0
rpool/data/subvol-197-disk-0  1.82G  9.18G     1.82G  /rpool/data/subvol-197-disk-0
rpool/data/subvol-199-disk-0  1.62G  3.38G     1.62G  /rpool/data/subvol-199-disk-0

ON CONTAINER:
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
rpool/data/subvol-183-disk-0   806G    643G  163G  80% /
none                           492K    4,0K  488K   1% /dev
tmpfs                           63G    200K   63G   1% /dev/shm
tmpfs                           63G    8,2M   63G   1% /run
tmpfs                          5,0M       0  5,0M   0% /run/lock
tmpfs                           63G       0   63G   0% /sys/fs/cgroup
 
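hi again, for what it's worth the numbers above add up once you know how df works on a ZFS dataset: Size is reported as Used + Avail, and Avail is capped by the pool's free space as well as by the quota. so the 900G limit cannot show through while the pool itself only has 163G free. a rough check against your outputs (assuming PVE set a 900G refquota, as the config suggests):

Code:
# Before the snapshot removal (pool: 62.9G free):
#   Size = 643G used + 63G avail  = 706G  -> matches df
# After the snapshot removal (pool: 163G free):
#   Size = 643G used + 163G avail = 806G  -> matches df
# The full 900G will only appear once the pool can offer this
# dataset 900G - 643G = 257G of free space. To watch it:
zfs get used,available,refquota rpool/data/subvol-183-disk-0

so the resize itself most likely worked; freeing space in the pool (on this or other datasets) should make the size reported by df converge towards 900G.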
