KVM disk resize online error (CEPH)

Sep 20, 2016
Hi!

I am having problems when I try to resize the disks of a KVM machine with ceph-backed disks.
While the VM is powered off I can resize them without problems, but when it is online I get an error.

The KVM definition is:

Code:
# cat /etc/pve/qemu-server/164.conf
bootdisk: scsi0
cores: 6
ide2: pve-isos:iso/debian-8.4.0-amd64-netinst.iso,media=cdrom
memory: 10240
name: test12
ostype: l26
scsi0: file=ceph_vmtier-10:vm-164-disk-1,discard=on,size=47G
scsi2: file=S2-sasr10-mirror0:vm-164-disk-1,discard=on,size=9G
virtio0: file=ceph_vmtier-10:vm-164-disk-2,size=6G
scsihw: virtio-scsi-single
smbios1: uuid=36e54e93-070a-46b1-86b7-2ab73e506f83
sockets: 1

While the VM is running, the ceph-backed disks cannot be resized (the LVM-backed one works fine):

Ceph-backed disk (SCSI driver):
Code:
# qm resize 164 scsi0 +1G
VM 164 qmp command 'block_resize' failed - Could not resize: Invalid argument

Ceph-backed disk (virtio driver):
Code:
# qm resize 164 virtio0 +1G
VM 164 qmp command 'block_resize' failed - Could not resize: Invalid argument

LVM-backed disk (SCSI driver):
Code:
# qm resize 164 scsi2 +1G
Size of logical volume S2vgsasr10m0/vm-164-disk-1 changed from 8.00 GiB (2048 extents) to 9.00 GiB (2304 extents).
  Logical volume vm-164-disk-1 successfully resized

And when the VM is not running I can resize the ceph-backed disks:
Code:
# qm stop 164
# qm resize 164 scsi0 +1G
/dev/rbd12
Resizing image: 100% complete...done.

Code:
# qm resize 164 virtio0 +1G
/dev/rbd13
Resizing image: 100% complete...done.
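
For completeness, the new size can also be checked on the ceph side (the pool name below is illustrative; only the storage ID ceph_vmtier-10 appears in the VM config above):

Code:
# rbd -p vmtier10 info vm-164-disk-1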

My Proxmox versions:
Code:
proxmox-ve: 4.2-51 (running kernel: 4.4.8-1-pve)
pve-manager: 4.2-5 (running version: 4.2-5/7cf09667)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.8-1-pve: 4.4.8-51
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-75
pve-firmware: 1.1-8
libpve-common-perl: 4.0-62
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-17
pve-container: 1.0-64
pve-firewall: 2.0-27
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
fence-agents-pve: not correctly installed
ceph: 0.94.7-1~bpo80+1

This behaviour does not occur when I resize a running LXC container with a ceph-backed disk.

Thanks and best regards
 
Works just fine here - could you upgrade your packages to the current version and see if you still experience this issue?
 
It seems you're missing the hotplug definition in your VM settings. In the GUI, enable Hotplug support for at least Disk in the Options section of VM 164. Something like this should then appear in 164.conf:
Code:
hotplug: disk,network,usb
Stopping and starting the VM is necessary to activate these settings. After that you should be able to resize your disk while the VM is running.
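For reference, the same can be set from the CLI (a quick sketch, using the VM ID from this thread):
Code:
qm set 164 -hotplug disk,network,usb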
 
You don't need the hotplug option to resize a disk online.

The hotplug option is for adding/removing disks online.
 
Hi!

Thanks for your replies. I've been busy these last few days and have had very little time to run tests, but I finally managed to track down the problem: the error only occurs when the KRBD option is enabled in the storage definition.
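
For reference, this is the kind of storage definition I mean (the monitor addresses, pool name and username below are placeholders, not my real values):

Code:
# /etc/pve/storage.cfg
rbd: ceph_vmtier-10
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        pool vmtier10
        username admin
        content images,rootdir
        krbd 1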

I've tested it in PVE 4.2 and PVE 4.3 with the same result:

Code:
pve-manager/4.3-1/e7cdc165 (running kernel: 4.4.19-1-pve)

KRBD is required for LXC, so I would like to keep it enabled.

Is it a bug?

Best Regards
 
You should probably add a second RBD pool and use that one without KRBD for QEMU - KRBD does not support most of the newer Ceph features, so after the switch to Jewel this will become even more relevant. Note that pools do not take extra/separate space in Ceph, so this is not a problem.
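
Something along these lines should do it (a sketch; the pool name, pg count, monitor addresses and storage ID are placeholders to adapt):

Code:
# create a second pool on the ceph side
ceph osd pool create vmtier10-kvm 128
# add it in PVE as an RBD storage for VM images, without KRBD
pvesm add rbd ceph_vmtier-10-kvm --pool vmtier10-kvm --monhost "10.0.0.1;10.0.0.2;10.0.0.3" --username admin --content images --krbd 0

After that you can move the VM's ceph-backed disks to the new storage with "Move disk" in the GUI (or qm move_disk on the CLI).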
 
