Wrong Disk Size in an LXC after Resize via GUI

Jun 11, 2021
Hello all,

Unfortunately, I'm at a bit of a loss at the moment and hope you can help me.

My server runs Proxmox 6.4-8 with ZFS on NVMe.
I recently resized a container using the GUI, but the container still shows me the old size of the volume.

On the host, df -h also still shows the old size of the subvol.

Have I forgotten a step here? I was under the impression that enlarging via the GUI also tells the container that it now has more space.
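For reference: if I understand it correctly, the resize I did in the GUI corresponds to this CLI call (container ID and mount point taken from my config further down; this is my assumption of what the GUI does underneath):
Code:
# resize mount point mp0 of container 103 to a total of 150G
~# pct resize 103 mp0 150G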

Many thanks in advance

Markus
Edit: A little more information:
Code:
~# zfs list
NAME                        USED  AVAIL     REFER  MOUNTPOINT
rzh-zfs                     857G  3.10G     23.7G  /rzh-zfs
rzh-zfs/subvol-100-disk-0  2.88G  3.10G     2.88G  /rzh-zfs/subvol-100-disk-0
rzh-zfs/subvol-101-disk-0  1.53G  3.10G     1.53G  /rzh-zfs/subvol-101-disk-0
rzh-zfs/subvol-102-disk-0   726M  3.10G      726M  /rzh-zfs/subvol-102-disk-0
rzh-zfs/subvol-103-disk-0  1.66G  3.10G     1.66G  /rzh-zfs/subvol-103-disk-0
rzh-zfs/subvol-103-disk-1  49.1G  3.10G     49.1G  /rzh-zfs/subvol-103-disk-1
rzh-zfs/subvol-103-disk-2   139M  3.10G      139M  /rzh-zfs/subvol-103-disk-2
rzh-zfs/subvol-103-disk-3  9.73G  3.10G     9.73G  /rzh-zfs/subvol-103-disk-3
rzh-zfs/subvol-104-disk-0  3.93G  3.10G     3.93G  /rzh-zfs/subvol-104-disk-0
rzh-zfs/subvol-105-disk-0  3.84G  3.10G     3.84G  /rzh-zfs/subvol-105-disk-0
rzh-zfs/subvol-108-disk-0  2.32G  3.10G     2.32G  /rzh-zfs/subvol-108-disk-0
rzh-zfs/subvol-200-disk-0  1.56G  3.10G     1.56G  /rzh-zfs/subvol-200-disk-0
rzh-zfs/subvol-201-disk-0  1.94G  3.10G     1.94G  /rzh-zfs/subvol-201-disk-0
rzh-zfs/subvol-202-disk-0  2.15G  3.10G     2.15G  /rzh-zfs/subvol-202-disk-0
rzh-zfs/subvol-203-disk-0  2.70G  3.10G     2.70G  /rzh-zfs/subvol-203-disk-0
rzh-zfs/subvol-204-disk-0  7.96G  2.04G     7.96G  /rzh-zfs/subvol-204-disk-0
rzh-zfs/subvol-205-disk-0  1.54G  3.10G     1.54G  /rzh-zfs/subvol-205-disk-0
rzh-zfs/subvol-206-disk-0  2.03G  3.10G     2.03G  /rzh-zfs/subvol-206-disk-0
rzh-zfs/subvol-207-disk-0  1.57G  3.10G     1.57G  /rzh-zfs/subvol-207-disk-0
rzh-zfs/subvol-209-disk-0  1.90G  3.10G     1.90G  /rzh-zfs/subvol-209-disk-0
rzh-zfs/subvol-210-disk-0  22.1G  3.10G     22.1G  /rzh-zfs/subvol-210-disk-0
rzh-zfs/vm-109-disk-0       258G   259G     1.77G  -
rzh-zfs/vm-150-disk-0       284G   201G     85.4G  -
rzh-zfs/vm-211-disk-0      15.5G  5.31G     13.3G  -
rzh-zfs/vm-250-disk-0       155G   119G     38.9G  -
Code:
~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.4-8 (running version: 6.4-8/185e14db)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-5.0.12-1-pve: 5.0.12-1
pve-kernel-5.0.8-2-pve: 5.0.8-2
pve-kernel-5.0.8-1-pve: 5.0.8-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 14.2.20-pve1
ceph-fuse: 14.2.20-pve1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
pve-zsync: 2.2
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
Code:
~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content snippets,vztmpl,backup,iso
        maxfiles 7
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: rzh-zfs
        pool rzh-zfs
        content rootdir,images
        nodes pverzh
        sparse 0

pbs: pbs_intern
        datastore itsaw
        server xxxxxxxxxxx
        content backup
        encryption-key xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        fingerprint xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        prune-backups keep-all=1
        username xxxxxxxxxxxxxxxxxxxxxxxxxxx
Code:
~# cat /etc/pve/lxc/103.conf
arch: amd64
cores: 4
features: nesting=1
hostname: web
memory: 6144
mp0: rzh-zfs:subvol-103-disk-1,mp=/srv/clouddata,backup=1,size=150G
mp1: rzh-zfs:subvol-103-disk-2,mp=/srv/mysql,backup=1,size=10G
mp2: rzh-zfs:subvol-103-disk-3,mp=/srv/html,backup=1,size=25G

onboot: 1
ostype: debian
rootfs: rzh-zfs:subvol-103-disk-0,size=10G
startup: order=3
swap: 6144
unprivileged: 1
 
And a little more information:
Code:
zfs get all rzh-zfs/subvol-103-disk-1
NAME                       PROPERTY              VALUE                       SOURCE
rzh-zfs/subvol-103-disk-1  type                  filesystem                  -
rzh-zfs/subvol-103-disk-1  creation              Mi Jul 24 19:43 2019        -
rzh-zfs/subvol-103-disk-1  used                  49.1G                       -
rzh-zfs/subvol-103-disk-1  available             3.10G                       -
rzh-zfs/subvol-103-disk-1  referenced            49.1G                       -
rzh-zfs/subvol-103-disk-1  compressratio         1.05x                       -
rzh-zfs/subvol-103-disk-1  mounted               yes                         -
rzh-zfs/subvol-103-disk-1  quota                 150G                        local
rzh-zfs/subvol-103-disk-1  reservation           none                        default
rzh-zfs/subvol-103-disk-1  recordsize            128K                        default
rzh-zfs/subvol-103-disk-1  mountpoint            /rzh-zfs/subvol-103-disk-1  default
rzh-zfs/subvol-103-disk-1  sharenfs              off                         default
rzh-zfs/subvol-103-disk-1  checksum              on                          default
rzh-zfs/subvol-103-disk-1  compression           on                          inherited from rzh-zfs
rzh-zfs/subvol-103-disk-1  atime                 on                          default
rzh-zfs/subvol-103-disk-1  devices               on                          default
rzh-zfs/subvol-103-disk-1  exec                  on                          default
rzh-zfs/subvol-103-disk-1  setuid                on                          default
rzh-zfs/subvol-103-disk-1  readonly              off                         default
rzh-zfs/subvol-103-disk-1  zoned                 off                         default
rzh-zfs/subvol-103-disk-1  snapdir               hidden                      default
rzh-zfs/subvol-103-disk-1  aclinherit            restricted                  default
rzh-zfs/subvol-103-disk-1  createtxg             82544                       -
rzh-zfs/subvol-103-disk-1  canmount              on                          default
rzh-zfs/subvol-103-disk-1  xattr                 sa                          local
rzh-zfs/subvol-103-disk-1  copies                1                           default
rzh-zfs/subvol-103-disk-1  version               5                           -
rzh-zfs/subvol-103-disk-1  utf8only              off                         -
rzh-zfs/subvol-103-disk-1  normalization         none                        -
rzh-zfs/subvol-103-disk-1  casesensitivity       sensitive                   -
rzh-zfs/subvol-103-disk-1  vscan                 off                         default
rzh-zfs/subvol-103-disk-1  nbmand                off                         default
rzh-zfs/subvol-103-disk-1  sharesmb              off                         default
rzh-zfs/subvol-103-disk-1  refquota              150G                        local
rzh-zfs/subvol-103-disk-1  refreservation        none                        default
rzh-zfs/subvol-103-disk-1  guid                  7270587841609428227         -
rzh-zfs/subvol-103-disk-1  primarycache          all                         default
rzh-zfs/subvol-103-disk-1  secondarycache        all                         default
rzh-zfs/subvol-103-disk-1  usedbysnapshots       0B                          -
rzh-zfs/subvol-103-disk-1  usedbydataset         49.1G                       -
rzh-zfs/subvol-103-disk-1  usedbychildren        0B                          -
rzh-zfs/subvol-103-disk-1  usedbyrefreservation  0B                          -
rzh-zfs/subvol-103-disk-1  logbias               latency                     default
rzh-zfs/subvol-103-disk-1  objsetid              777                         -
rzh-zfs/subvol-103-disk-1  dedup                 off                         default
rzh-zfs/subvol-103-disk-1  mlslabel              none                        default
rzh-zfs/subvol-103-disk-1  sync                  standard                    default
rzh-zfs/subvol-103-disk-1  dnodesize             legacy                      default
rzh-zfs/subvol-103-disk-1  refcompressratio      1.05x                       -
rzh-zfs/subvol-103-disk-1  written               49.1G                       -
rzh-zfs/subvol-103-disk-1  logicalused           51.4G                       -
rzh-zfs/subvol-103-disk-1  logicalreferenced     51.4G                       -
rzh-zfs/subvol-103-disk-1  volmode               default                     default
rzh-zfs/subvol-103-disk-1  filesystem_limit      none                        default
rzh-zfs/subvol-103-disk-1  snapshot_limit        none                        default
rzh-zfs/subvol-103-disk-1  filesystem_count      none                        default
rzh-zfs/subvol-103-disk-1  snapshot_count        none                        default
rzh-zfs/subvol-103-disk-1  snapdev               hidden                      default
rzh-zfs/subvol-103-disk-1  acltype               posix                       local
rzh-zfs/subvol-103-disk-1  context               none                        default
rzh-zfs/subvol-103-disk-1  fscontext             none                        default
rzh-zfs/subvol-103-disk-1  defcontext            none                        default
rzh-zfs/subvol-103-disk-1  rootcontext           none                        default
rzh-zfs/subvol-103-disk-1  relatime              off                         default
rzh-zfs/subvol-103-disk-1  redundant_metadata    all                         default
rzh-zfs/subvol-103-disk-1  overlay               on                          default
rzh-zfs/subvol-103-disk-1  encryption            off                         default
rzh-zfs/subvol-103-disk-1  keylocation           none                        default
rzh-zfs/subvol-103-disk-1  keyformat             none                        default
rzh-zfs/subvol-103-disk-1  pbkdf2iters           0                           default
rzh-zfs/subvol-103-disk-1  special_small_blocks  0                           default

Apparently the new size is not being taken over, can that be?
If so, how do I fix it?
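For a compact check, the size-related properties in one call (plain ZFS syntax, nothing Proxmox-specific):
Code:
# show quota, refquota, used and available space of the affected subvol
~# zfs get -o property,value quota,refquota,used,available rzh-zfs/subvol-103-disk-1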
 
Hi ITSAW,

Sometimes the VM or CT itself also needs to be resized.
Are you using partitions or LVM on your guests?
How did you format the drive?

I don't know your structure; a good look with fdisk (inside the CT or VM) will give you a start...
 
So, unfortunately, this answer does not add any value.

Since the container sits on ZFS, "fdisk -l" shows nothing at all, and it can't: a ZFS subvol is not a block device.

Maybe a look at the spoiler will help, I think the structure is clear.
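What can be checked inside the container is the mount point itself, for example (mount point taken from my 103.conf):
Code:
# inside CT 103: show the size the container actually sees for the Nextcloud data
~# df -h /srv/clouddata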
 
OK, don't panic :)
So what is your build and structure?
Is it a VM or a CT?
ZFS should only help here.
How did you connect that drive in PVE?
Is it PVE's ZFS or the guest's ZFS?
Please give some details.
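For example, something like this (container ID and dataset name taken from your posts, adjust if needed):
Code:
# container configuration as PVE sees it
~# pct config 103
# size-related properties of the affected dataset
~# zfs list -o name,used,avail,refer,quota,refquota rzh-zfs/subvol-103-disk-1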

Proxmox is open source and can be a bit buggy. The recent update shows that. See my post (not related to you): https://forum.proxmox.com/threads/c...fo-in-to-centos8-guest-etc-resolv-conf.90744/

The thing with your issue is that PVE has had storage solved from the beginning; that is why I'm saying this may be an issue inside your VM or CT.

OK, my fault. So you have a CT, and that is what your complaint is about.
Read my post. They can be buggy, but they are solid when it comes to storage.

Give more details

I will try to solve this with you.
 
Thank you very much for the help.

So then briefly to the setup:

Dedicated root server with an enterprise SSD for the OS and small tests, and two Samsung NVMe SSDs as the ZFS pool (PVE).

All containers, and the three VMs as well, have their "hard drives" on the ZFS pool.
The volume I'm trying to expand here I had originally budgeted at 50GB; it holds my Nextcloud data.
In the PVE GUI the space has already been expanded to 150GB, but as described in the initial post, ZFS does not pick up this size.
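To illustrate the mismatch (paths and IDs from my setup; pct exec just runs the command inside the container):
Code:
# the container config already carries the new size ...
~# grep mp0 /etc/pve/lxc/103.conf
# ... while df inside the container still reports the old size
~# pct exec 103 -- df -h /srv/clouddata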

In a VM with LVM the procedure would be clear to me, I've been in IT for too long :)

I hope that it is just a small mistake on my part and that I can be helped here.

Thanks again

Markus
 
Hi Markus,

Sorry have been busy.

Let's get the terminology in place first. The host is your Proxmox PVE; then you have the CT where your Nextcloud lives, and the OS is the operating system inside your CT. Which OS (and which file system) is running inside your CT that doesn't recognize the new disk size, while the Proxmox host itself sees the new size? Am I correct?

CTs are installed from templates and have limited functionality, but they are great for a "one app on one machine" setup.

If my suspicion is correct, then you just need to resize your CT's partition. On some file systems this can be done on a live system, so to speak "on the fly".
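Purely as a generic illustration, assuming an ext4 file system on a block device /dev/sdb1 that has already been enlarged (this is not specific to your ZFS setup):
Code:
# grow an ext4 file system to fill its underlying device, works online
~# resize2fs /dev/sdb1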

Have a look at the CT manual for more details on how to configure your LXC container.

Proxmox is built on Debian, but the templates Proxmox offers probably don't get too much attention.
I already mentioned some bugs, for example in the CentOS8 templates, or let's say in how the Proxmox templates team handles all of them.
In my belief, there is nothing related to ZFS in your post.
 