LXC not resizing (Increase)

Nail2540

New Member
Jul 13, 2023
Hello all

I've attempted to increase the size of a Linux container running Alpine Linux, something that has worked before. The resize itself (done through the GUI) succeeds, but when I start the container, the boot disk doesn't show the expanded size. Screenshots of what I mean are attached.
How do I proceed?

Thank you
 

Attachments

  • Screenshot_20230713_005028.png (45.3 KB)
  • Screenshot_20230713_004901.png (48.5 KB)
  • Screenshot_20230713_004837.png (57.8 KB)
Hi,
what is the output of df -h from within the container? What filesystem is in use?
 
Code:
media:~# df -h
Filesystem                Size      Used Available Use% Mounted on
rpool/data/subvol-101-disk-0
                        820.5G    779.5G     41.0G  95% /
none                    492.0K      4.0K    488.0K   1% /dev
run                      15.6G    464.0K     15.6G   0% /run
shm                      15.6G         0     15.6G   0% /dev/shm
udev                     15.5G         0     15.5G   0% /dev/dri/card0
udev                     15.5G         0     15.5G   0% /dev/dri/renderD128
udev                     15.5G         0     15.5G   0% /dev/net
udev                     15.5G         0     15.5G   0% /dev/net/tun
udev                     15.5G         0     15.5G   0% /dev/full
udev                     15.5G         0     15.5G   0% /dev/null
udev                     15.5G         0     15.5G   0% /dev/random
udev                     15.5G         0     15.5G   0% /dev/tty
udev                     15.5G         0     15.5G   0% /dev/urandom
udev                     15.5G         0     15.5G   0% /dev/zero
none                    492.0K      4.0K    488.0K   1% /proc/sys/kernel/random/boot_id
overlay                 820.5G    779.5G     41.0G  95% /var/lib/docker/overlay2/c0d6ee883a3e5cb788f0d9fd9e8a8b796d0b0ecd4e694c7ada349f967220d487/merged
overlay                 820.5G    779.5G     41.0G  95% /var/lib/docker/overlay2/eef97fda0ce3784d19df477d7765199a02dac228e87d186541583661fd4753ae/merged
overlay                 820.5G    779.5G     41.0G  95% /var/lib/docker/overlay2/292e81acbe85de5c10fc00c1d809c95b605fe3c66d3a0831a1d52f2a65446f25/merged
overlay                 820.5G    779.5G     41.0G  95% /var/lib/docker/overlay2/a6b8104f95ac27cfb9c86dccc64318d0d7b849792d27be6d31b2936b11bde775/merged
overlay                 820.5G    779.5G     41.0G  95% /var/lib/docker/overlay2/c905b5fff53240445eba9d0aa2359ce9495eedce89d00e9be4570cb7c18c5fc7/merged

The filesystem used for / is ZFS.
 
What is the output of zfs get all rpool/data/subvol-101-disk-0 on the host?
 
Code:
root@lyc:~# zfs get all rpool/data/subvol-101-disk-0
NAME                          PROPERTY              VALUE                          SOURCE
rpool/data/subvol-101-disk-0  type                  filesystem                     -
rpool/data/subvol-101-disk-0  creation              Sun Jan  1 21:48 2023          -
rpool/data/subvol-101-disk-0  used                  780G                           -
rpool/data/subvol-101-disk-0  available             41.0G                          -
rpool/data/subvol-101-disk-0  referenced            780G                           -
rpool/data/subvol-101-disk-0  compressratio         1.01x                          -
rpool/data/subvol-101-disk-0  mounted               yes                            -
rpool/data/subvol-101-disk-0  quota                 none                           default
rpool/data/subvol-101-disk-0  reservation           none                           default
rpool/data/subvol-101-disk-0  recordsize            128K                           default
rpool/data/subvol-101-disk-0  mountpoint            /rpool/data/subvol-101-disk-0  default
rpool/data/subvol-101-disk-0  sharenfs              off                            default
rpool/data/subvol-101-disk-0  checksum              on                             default
rpool/data/subvol-101-disk-0  compression           on                             inherited from rpool
rpool/data/subvol-101-disk-0  atime                 on                             inherited from rpool
rpool/data/subvol-101-disk-0  devices               on                             default
rpool/data/subvol-101-disk-0  exec                  on                             default
rpool/data/subvol-101-disk-0  setuid                on                             default
rpool/data/subvol-101-disk-0  readonly              off                            default
rpool/data/subvol-101-disk-0  zoned                 off                            default
rpool/data/subvol-101-disk-0  snapdir               hidden                         default
rpool/data/subvol-101-disk-0  aclmode               discard                        default
rpool/data/subvol-101-disk-0  aclinherit            restricted                     default
rpool/data/subvol-101-disk-0  createtxg             172699                         -
rpool/data/subvol-101-disk-0  canmount              on                             default
rpool/data/subvol-101-disk-0  xattr                 sa                             local
rpool/data/subvol-101-disk-0  copies                1                              default
rpool/data/subvol-101-disk-0  version               5                              -
rpool/data/subvol-101-disk-0  utf8only              off                            -
rpool/data/subvol-101-disk-0  normalization         none                           -
rpool/data/subvol-101-disk-0  casesensitivity       sensitive                      -
rpool/data/subvol-101-disk-0  vscan                 off                            default
rpool/data/subvol-101-disk-0  nbmand                off                            default
rpool/data/subvol-101-disk-0  sharesmb              off                            default
rpool/data/subvol-101-disk-0  refquota              1.33T                          local
rpool/data/subvol-101-disk-0  refreservation        none                           default
rpool/data/subvol-101-disk-0  guid                  12088556580609526077           -
rpool/data/subvol-101-disk-0  primarycache          all                            default
rpool/data/subvol-101-disk-0  secondarycache        all                            default
rpool/data/subvol-101-disk-0  usedbysnapshots       0B                             -
rpool/data/subvol-101-disk-0  usedbydataset         780G                           -
rpool/data/subvol-101-disk-0  usedbychildren        0B                             -
rpool/data/subvol-101-disk-0  usedbyrefreservation  0B                             -
rpool/data/subvol-101-disk-0  logbias               latency                        default
rpool/data/subvol-101-disk-0  objsetid              157144                         -
rpool/data/subvol-101-disk-0  dedup                 off                            default
rpool/data/subvol-101-disk-0  mlslabel              none                           default
rpool/data/subvol-101-disk-0  sync                  standard                       inherited from rpool
rpool/data/subvol-101-disk-0  dnodesize             legacy                         default
rpool/data/subvol-101-disk-0  refcompressratio      1.01x                          -
rpool/data/subvol-101-disk-0  written               780G                           -
rpool/data/subvol-101-disk-0  logicalused           788G                           -
rpool/data/subvol-101-disk-0  logicalreferenced     788G                           -
rpool/data/subvol-101-disk-0  volmode               default                        default
rpool/data/subvol-101-disk-0  filesystem_limit      none                           default
rpool/data/subvol-101-disk-0  snapshot_limit        none                           default
rpool/data/subvol-101-disk-0  filesystem_count      none                           default
rpool/data/subvol-101-disk-0  snapshot_count        none                           default
rpool/data/subvol-101-disk-0  snapdev               hidden                         default
rpool/data/subvol-101-disk-0  acltype               posix                          local
rpool/data/subvol-101-disk-0  context               none                           default
rpool/data/subvol-101-disk-0  fscontext             none                           default
rpool/data/subvol-101-disk-0  defcontext            none                           default
rpool/data/subvol-101-disk-0  rootcontext           none                           default
rpool/data/subvol-101-disk-0  relatime              on                             inherited from rpool
rpool/data/subvol-101-disk-0  redundant_metadata    all                            default
rpool/data/subvol-101-disk-0  overlay               on                             default
rpool/data/subvol-101-disk-0  encryption            off                            default
rpool/data/subvol-101-disk-0  keylocation           none                           default
rpool/data/subvol-101-disk-0  keyformat             none                           default
rpool/data/subvol-101-disk-0  pbkdf2iters           0                              default
rpool/data/subvol-101-disk-0  special_small_blocks  0                              default
 
Then the resize of the container disk did not work as expected: only the size in the container config changed, while the actual volume did not. Do you actually have space left on that pool? What is the output of zpool list rpool? And what does pveversion -v report?
 
Code:
root@lyc:~# zpool list rpool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.36T  1.26T   105G        -         -    54%    92%  1.00x    ONLINE  -

Code:
root@lyc:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e)
pve-kernel-5.15: 7.4-4
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
The issue is that your pool is almost full; only 41G remain available. Note that ZFS also reserves some disk capacity ("slop space") so that the pool never runs completely out of space [0].

When the container is stopped, the WebUI shows you the disk size from the config, which is also the refquota for the volume, as you can see from your previous output for that disk. While the container is running, the actual used and available sizes reported by the filesystem are shown instead.

So you will have to expand your pool capacity (or free up space) before the container can use the extra volume space.
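A minimal sketch of what that could look like on the host. The device name below is a placeholder, not taken from this thread; check your actual vdev layout with zpool status before doing anything:

```shell
# Placeholders only: /dev/sdX is hypothetical, adjust to your pool layout.
# First inspect the pool and see whether any datasets or snapshots can be pruned:
zpool status rpool
zfs list -o name,used,refer -r rpool/data

# After replacing a mirror's disks with larger ones (zpool replace, one disk at
# a time, waiting for each resilver), let the pool grow into the new capacity:
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sdX
```

The -e flag to zpool online asks ZFS to expand the device to use all available space; with autoexpand=on this happens automatically once every disk in the vdev has been replaced.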

[0] https://openzfs.github.io/openzfs-d....html?highlight=spa_slop_shift#spa-slop-shift
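As a back-of-the-envelope check, the reservation mentioned in [0] can be modeled with a few lines of arithmetic. This is a sketch assuming OpenZFS's default spa_slop_shift of 5 (reserve 1/32 of the pool, clamped between 128 MiB and 128 GiB); the exact behaviour depends on the ZFS version. For a 1.36T pool it lands close to the ~41G shown as unavailable above:

```python
# Rough model of the OpenZFS "slop space" reservation.
# Assumption: default spa_slop_shift = 5 (reserve 1/32 of the pool),
# clamped between 128 MiB and 128 GiB; details vary by ZFS version.
MIB = 1024**2
GIB = 1024**3
TIB = 1024**4

def slop_space(pool_size_bytes, spa_slop_shift=5):
    # Shifting right by spa_slop_shift divides the pool size by 2**shift.
    slop = pool_size_bytes >> spa_slop_shift
    return max(min(slop, 128 * GIB), 128 * MIB)

pool = int(1.36 * TIB)  # zpool list reports SIZE 1.36T
print(f"slop reservation: {slop_space(pool) / GIB:.1f} GiB")  # ~43.5 GiB
```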
 
