LXC not showing the correct size after resize

junukwon
Nov 9, 2023
Hello, I've just resized an HDD RAID on my Proxmox machine.
Then I resized the LVM pool using lvextend, and then I resized the CT's volume from 5000G to 6000G via the web UI (Resources -> select mp -> Volume Action -> Resize).

However, the problem is that inside the LXC, df -h still reports the volume as 5000G.

So I'm here seeking help with the situation.
It's quite frustrating, as normally just pressing Resize in the web UI did everything. I also confirmed in the log that the task ended successfully.
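
Concretely, the pool-extend step looked roughly like this (a sketch, not my exact command history; the VG/LV names match my hdd pool):

```shell
# Grow the thin pool "hdd" into the space freed up by the RAID resize
# (-l +100%FREE claims all remaining free extents in the VG)
lvextend -l +100%FREE /dev/hdd/hdd
```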

I've attached the captures & outputs of several disk-related commands below. I'm running PVE 8.1.3 on a Dell R740xd.


Any suggestions are welcome,
Thanks in advance!

From PVE, LXC > Resources

Node storage setup


Inside the Host:


Code:
root@server:~# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                             8:0    0   3.5T  0 disk
└─sda3                          8:3    0   3.5T  0 part
  ├─data-data_tmeta           252:2    0  15.9G  0 lvm
  │ └─data-data-tpool         252:8    0   3.5T  0 lvm
  (...)
sdb                             8:16   0   7.3T  0 disk
├─hdd-hdd_tmeta               252:6    0  15.9G  0 lvm
│ └─hdd-hdd-tpool             252:49   0   7.2T  0 lvm
│   ├─hdd-hdd                 252:50   0   7.2T  1 lvm
│   └─hdd-vm--128--disk--0    252:51   0   5.9T  0 lvm
└─hdd-hdd_tdata               252:7    0   7.2T  0 lvm
  └─hdd-hdd-tpool             252:49   0   7.2T  0 lvm
    ├─hdd-hdd                 252:50   0   7.2T  1 lvm
    └─hdd-vm--128--disk--0    252:51   0   5.9T  0 lvm


Code:
root@server:~# lvs
  LV            VG     Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  (...)
  data          data   twi-aotz--  <3.46t               19.40  1.45                           
  vm-152-disk-0 data   Vwi-a-tz-- 128.00g data          58.02                                
  vm-153-disk-0 data   Vwi-a-tz--  64.00g data          7.13                                 
  hdd           hdd    twi-aotz--   7.24t               64.09  11.07                         
  vm-128-disk-0 hdd    Vwi-aotz--  <5.86t hdd           79.24                                
  nvme2t        nvme2t twi-aotz--  <1.79t               90.37  3.33                          
  vm-108-disk-0 nvme2t Vwi-aotz-- 128.00g nvme2t        99.96                                
  vm-112-disk-0 nvme2t Vwi-a-tz-- 128.00g nvme2t        95.17                                
  vm-128-disk-0 nvme2t Vwi-aotz-- 128.00g nvme2t        99.95                                
  vm-151-disk-0 nvme2t Vwi-a-tz--   1.25t nvme2t        99.75                                
  root          pve    -wi-ao---- 214.50g                                                    
  swap          pve    -wi-ao----   8.00g



Inside the LXC:

Code:
root@rclone:/# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/data-vm--128--disk--0    7.8G  785M  6.6G  11% /
/dev/mapper/hdd-vm--128--disk--0     4.9T  4.6T   23M 100% /hdd
/dev/mapper/nvme2t-vm--128--disk--0  125G  119G   99M 100% /cache
none                                 492K  4.0K  488K   1% /dev
efivarfs                             304K  210K   90K  71% /sys/firmware/efi/efivars
tmpfs                                 63G     0   63G   0% /dev/shm
tmpfs                                 26G   80K   26G   1% /run
tmpfs                                5.0M     0  5.0M   0% /run/lock


Code:
root@rclone:/# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   3.5T  0 disk
`-sda3        8:3    0   3.5T  0 part
sdb           8:16   0   7.3T  0 disk
sdc           8:32   0 223.5G  0 disk
|-sdc1        8:33   0  1007K  0 part
|-sdc2        8:34   0     1G  0 part
`-sdc3        8:35   0 222.5G  0 part
nvme2n1     259:0    0 931.5G  0 disk
|-nvme2n1p1 259:1    0 931.5G  0 part
`-nvme2n1p9 259:2    0     8M  0 part
nvme1n1     259:3    0 931.5G  0 disk
|-nvme1n1p1 259:4    0 931.5G  0 part
`-nvme1n1p9 259:5    0     8M  0 part
nvme0n1     259:6    0   1.8T  0 disk
 
The file system was not resized.

Can you resize it by running the following commands on the host? (The LV is in the hdd VG.)
Code:
e2fsck -f /dev/hdd/vm-128-disk-0
resize2fs /dev/hdd/vm-128-disk-0
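
Afterwards, the new size should be visible from inside the container, e.g. (container ID and mount path taken from this thread):

```shell
# Run df inside container 128 to confirm the grown filesystem
pct exec 128 -- df -h /hdd
```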
 
@fschauer Thanks for the reply! I actually solved the problem before the post was approved, but thought leaving it up would be better (I couldn't modify it at that time).

And you're correct: running resize2fs made the LXC properly detect the increased space.

I still have doubts, though: why doesn't the web UI perform this step?

When I try the same with the same or other LXCs using mount points from my SSD LVM, the size increases automatically just by clicking Resize in the web UI, without manually running resize2fs.

Both are LVM, so I wonder where the difference comes from.
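
In case it's relevant: the CLI equivalent of the web-UI resize is pct resize, which is supposed to grow the filesystem inside the container as well (the mount point name mp0 is a guess for my setup):

```shell
# Grow mount point mp0 of container 128 to 6010G;
# pct resize should also resize the filesystem on it
pct resize 128 mp0 6010G
```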
 
What was the output of the task log when the file system was not resized? Did it happen to say "Failed to update the container's filesystem"? Was the container running during the resize?
 
Quote:
What was the output of the task log when the file system was not resized? Did it happen to say "Failed to update the container's filesystem"? Was the container running during the resize?
I don't currently have the output, but I double-checked that nothing went wrong and it said "TASK OK" at the end.

I did the resize twice (5000G -> 6000G and 6000G -> 6010G); both went well and the tasks exited successfully, but the volume wasn't resized until I did it manually.
 
Could you please provide the task log?
 
