VM - Resize Disk - BUG?

Hello everyone,

I have a problem when resizing a disk of a VM.
Setup:
1 Proxmox node (8.2.7) running only a single VM (Proxmox Backup Server).
The VM has 3 disks attached... disk-1 is the storage for the backup server (datastore).

Because the datastore is slowly filling up, I added more disk space via the GUI (Disk Action -> Resize -> 5000), and now I have a strange problem:


syslog:
Code:
2024-11-04T09:17:15.694461+01:00 proxmoxsm39 pvedaemon[1773]: <root@pam> starting task UPID:proxmoxsm(...)B:resize:21212:root@pam:
2024-11-04T09:17:15.724420+01:00 proxmoxsm39 pvedaemon[3861830]: <root@pam> update VM 21212: resize --disk scsi1 --size +5000G
2024-11-04T09:17:16.179429+01:00 proxmoxsm39 pvedaemon[1773]: <root@pam> end task UPID:proxmoxsm(...)0B:resize:21212:root@pam: OK

Before resizing, the disk was 9 TB, so after the 5000 GiB expansion I expected about 14 TB. But now it shows me 119313G (lsblk: 116.5T)? The funny thing is that the whole cluster doesn't even have that much storage available...

I have already rebooted both the VM and the Proxmox host... no change, and a
Code:
qm disk rescan --vmid 21212
did not help either...
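
For reference, this is roughly how the different size reports can be compared on the host (dataset name and zvol path taken from my setup; the grep assumes the datastore disk is attached as scsi1, as in the resize task above):

Code:
# what ZFS thinks the zvol size is
zfs get -H volsize rpool/data/vm-21212-disk-1
# what Proxmox has recorded for the disk in the VM configuration
qm config 21212 | grep ^scsi1
# what the block device itself reports (path may differ on other setups)
lsblk /dev/zvol/rpool/data/vm-21212-disk-1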

Any idea what the problem is?


Thank you and best regards
Fabian


 
Hi,
yeah, that shouldn't happen. What does zpool history | grep 21212 show? What about zfs list -o space | grep 21212? Please also share the output of pveversion -v.
 
Hey @fiona :)


Code:
zpool history | grep 21212
2021-12-15.09:19:42 zfs create -s -V 33554432k rpool/data/vm-21212-disk-0
2021-12-15.09:54:45 zfs destroy -r rpool/data/vm-21212-disk-0
2021-12-15.09:57:07 zfs create -s -V 33554432k rpool/data/vm-21212-disk-0
2021-12-15.11:16:45 zfs create -s -V 33554432k rpool/data/vm-21212-disk-1
2021-12-15.11:18:17 zfs destroy -r rpool/data/vm-21212-disk-1
2021-12-15.11:18:22 zfs destroy -r rpool/data/vm-21212-disk-0
2021-12-15.11:19:44 zfs create -s -V 33554432k rpool/data/vm-21212-disk-0
2021-12-15.11:25:28 zfs create -s -V 33554432k rpool/data/vm-212122-disk-0
2021-12-15.11:29:18 zfs destroy -r rpool/data/vm-212122-disk-0
2021-12-15.11:30:00 zfs destroy -r rpool/data/vm-21212-disk-0
2021-12-15.11:32:07 zfs create -s -V 33554432k rpool/data/vm-21212-disk-0
2021-12-15.11:41:33 zfs create -s -V 33554432k rpool/data/vm-21212-disk-1
2021-12-15.11:46:26 zfs destroy -r rpool/data/vm-21212-disk-1
2021-12-15.11:46:31 zfs destroy -r rpool/data/vm-21212-disk-0
2021-12-15.11:48:04 zfs create -s -V 33554432k rpool/data/vm-21212-disk-0
2021-12-15.11:59:34 zfs create -s -V 33554432k rpool/data/vm-21212121-disk-0
2021-12-15.12:38:14 zfs destroy -r rpool/data/vm-21212121-disk-0
2021-12-15.13:31:46 zfs destroy -r rpool/data/vm-21212-disk-0
2021-12-15.13:33:10 zfs create -s -V 33554432k rpool/data/vm-21212-disk-0
2021-12-15.14:08:12 zfs create -s -V 9765388288k rpool/data/vm-21212-disk-1
2022-04-28.10:55:19 zfs create -s -V 1610612736k rpool/data/vm-21212-disk-2
2023-07-14.13:51:22 zfs set volsize=41943040k rpool/data/vm-21212-disk-0
2023-08-04.09:18:02 zfs set volsize=52428800k rpool/data/vm-21212-disk-0
2023-09-28.09:01:28 zfs set volsize=78643200k rpool/data/vm-21212-disk-0
2024-04-25.14:25:33 zfs set volsize=15008268288k rpool/data/vm-21212-disk-1
2024-08-19.13:11:48 zfs set volsize=119865868288k rpool/data/vm-21212-disk-1
2024-11-04.09:17:16 zfs set volsize=125108748288k rpool/data/vm-21212-disk-1

Code:
root@proxmoxsm39:~# zfs list -o space | grep 21212
rpool/data/vm-21212-disk-0  4.38T  66.6G        0B   66.6G             0B         0B
rpool/data/vm-21212-disk-1  4.38T  12.0T        0B   12.0T             0B         0B
rpool/data/vm-21212-disk-2  4.38T   403G        0B    403G             0B         0B


and:

Code:
root@proxmoxsm39:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-2-pve)
pve-manager: 8.2.7 (running version: 8.2.7/3e0176e6bb2ade3b)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-13
pve-kernel-5.13: 7.1-9
proxmox-kernel-6.8: 6.8.12-2
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
pve-kernel-5.0: 6.0-11
pve-kernel-5.15.152-1-pve: 5.15.152-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: 0.8.41
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.3
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.1
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.10
libpve-storage-perl: 8.2.5
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-4
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.2.0
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.0.7
pve-firmware: 3.13-2
pve-ha-manager: 4.0.5
pve-i18n: 3.2.3
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 
Code:
2024-04-25.14:25:33 zfs set volsize=15008268288k rpool/data/vm-21212-disk-1
2024-08-19.13:11:48 zfs set volsize=119865868288k rpool/data/vm-21212-disk-1
2024-11-04.09:17:16 zfs set volsize=125108748288k rpool/data/vm-21212-disk-1
Seems like the disk was extended to
15008268288k ≈ 14 TiB in April,
119865868288k ≈ 111.6 TiB in August,
and the last resize actually added only the 5000 GiB.
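
Just to double-check the arithmetic, the difference between the last two volsize values from the history works out to exactly the requested 5000 GiB:

Code:
# 125108748288k - 119865868288k, converted from KiB to GiB
echo $(( (125108748288 - 119865868288) / 1024 / 1024 ))
# -> 5000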

Probably the actual size was not recorded correctly in the VM configuration for some reason after the resize in August.
 
Unfortunately shrinking disks is notoriously easy to mess up. Best to have a restore-tested backup before attempting anything!

Now the partitions inside the VM are not using the full space, so in principle, it should be fine to shrink it down. I haven't actually shrunk a ZFS volume myself, but there are reports of other users: https://forum.proxmox.com/threads/shrink-a-attached-zfs-disk.46266/post-219599

When picking a value, I'd stay far enough above the 12.0T that is currently being used according to zfs list -o space!
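
A rough sketch of what that could look like here, going back to the pre-August size of 15008268288k (≈ 14 TiB, comfortably above the 12.0T in use) taken from the zpool history above. This is untested on my side, so only with a restore-tested backup at hand and ideally with the VM shut down:

Code:
# shrink the zvol back to its April 2024 size (~14 TiB); the target value is
# just an example, anything safely above the 12.0T in use should do
zfs set volsize=15008268288k rpool/data/vm-21212-disk-1
# let Proxmox re-read the size into the VM configuration
qm disk rescan --vmid 21212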
 
That works! Thank you very much :)
 
