Problems resizing rbd

This output is the same as the one I sent before. Sometimes it works and sometimes it doesn't.
But moving the disk to another storage (and back) solved this problem.
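
For reference, the workaround looked roughly like this (a sketch; the intermediate storage name local-lvm is only a placeholder, and the container may need to be stopped for the move):

Code:
# move the root volume to another storage and back to refresh the device
pct move_volume 150 rootfs local-lvm
pct move_volume 150 rootfs vNTDB-Storage_ct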

Code:
# pct resize 150 rootfs 15G
Resizing image: 100% complete...done.
resize2fs 1.43.4 (31-Jan-2017)
Filesystem at /dev/rbd/vNTDB-Storage/vm-150-disk-0 is mounted on /tmp; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 2
The filesystem on /dev/rbd/vNTDB-Storage/vm-150-disk-0 is now 3932160 (4k) blocks long.
 
The filesystem on /dev/rbd/vNTDB-Storage/vm-150-disk-0 is now 3932160 (4k) blocks long.
This says that it worked and your image is now 15 GB.
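
As a quick sanity check, the reported block count converts to exactly that size:

Code:
# 3932160 blocks x 4 KiB per block = 15 GiB
echo $((3932160 * 4096 / 1024 / 1024 / 1024))    # prints 15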
 
Here is the next one.
Code:
root@vNTDB-host-1:~# pct resize 264 rootfs 10G
Resizing image: 100% complete...done.
resize2fs 1.43.4 (31-Jan-2017)
The filesystem is already 1310720 (4k) blocks long.  Nothing to do!
 
How is this container configured (pct config 264)?
 
All containers are cloned from a template (full clone).
For this reason, all containers have the same configuration.

Code:
root@vNTDB-host-1:~# pct config 264
arch: amd64
cores: 1
features: nesting=1
hostname: Nat-system-2-225
memory: 2048
nameserver: 10.102.102.132
net0: name=eth0,bridge=vmbr0,gw=10.102.166.129,hwaddr=16:5B:58:46:BA:56,ip=10.102.166.225/25,type=veth
net1: name=eth1,bridge=vmbr0,hwaddr=9E:71:93:6E:9C:83,ip=10.102.166.226/32,type=veth
net2: name=eth2,bridge=vmbr0,hwaddr=CE:14:AA:50:30:0E,ip=10.102.166.227/32,type=veth
net3: name=eth3,bridge=vmbr0,hwaddr=C2:7A:40:20:D0:BF,ip=10.102.166.228/32,type=veth
net4: name=eth4,bridge=vmbr0,hwaddr=5E:3B:D0:21:6C:B6,ip=10.102.166.229/32,type=veth
net5: name=eth6,bridge=vmbr1,hwaddr=76:61:B7:DE:BD:88,ip=192.168.0.1/24,tag=3920,type=veth
net6: name=eth7,bridge=vmbr1,hwaddr=46:0F:0E:D1:16:A6,ip=192.168.2.1/24,tag=3920,type=veth
onboot: 1
ostype: ubuntu
rootfs: vNTDB-Storage_ct:vm-264-disk-0,size=10G
swap: 512
 
Do I see this correctly: some containers can be resized and some can't? If so, what is the difference between them?
 
You are right. There is no relevant difference. All containers are located on Ceph.
Maybe Ceph doesn't update the parameters of the raw device.
But from my point of view, the task result is not OK in this case.
The user is not informed about this situation.
An additional test is needed to check the correct size before running resize2fs; a possible check is sketched below.
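
Such a pre-check could look roughly like this (a sketch; pool, image, and device path are taken from the output above and may differ elsewhere):

Code:
# size of the image as Ceph sees it
rbd info vNTDB-Storage/vm-264-disk-0 | grep size
# size of the mapped block device as the kernel sees it
blockdev --getsize64 /dev/rbd/vNTDB-Storage/vm-264-disk-0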
 
There is no relevant difference.
What would be the irrelevant difference?

Maybe Ceph doesn't update the parameters of the raw device.
I don't understand what you mean by raw device. The size in the config is independent of Ceph and is updated by Proxmox VE.

The user is not informed about this situation.
How should it? The system only sees that the blocks are already the requested size and therefore doesn't need a filesystem expansion.
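
The two numbers being compared can be read directly (a sketch; the device path is assumed from the earlier output):

Code:
# block count recorded in the ext4 superblock
dumpe2fs -h /dev/rbd/vNTDB-Storage/vm-264-disk-0 | grep 'Block count'
# device size expressed in 4 KiB blocks, for comparison
echo $(( $(blockdev --getsize64 /dev/rbd/vNTDB-Storage/vm-264-disk-0) / 4096 ))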

An additional test is needed to check the correct size before running resize2fs.
resize2fs 1.43.4 (31-Jan-2017) The filesystem is already 1310720 (4k) blocks long. Nothing to do!
But resize2fs already says that there is nothing to do, as the partition is already the specified size.

root@vNTDB-host-1:~# pct resize 264 rootfs 10G
rootfs: vNTDB-Storage_ct:vm-264-disk-0,size=10G
Are you sure that the container wasn't already on 10 GB?
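
One way to check is to look at the root filesystem from inside the container (a sketch using pct exec):

Code:
# report the size of / as seen inside CT 264
pct exec 264 -- df -h /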
 
