Disk resize failed and reset the VM

Jul 24, 2023
Hello,

I resized a virtual hard disk by 50 GB but got the following error:
Code:
TASK ERROR: VM 176 qmp command 'block_resize' failed - client closed connection
The error itself is not the biggest problem; the bigger issue is that the VM was reset afterwards. Unfortunately, a MySQL database also broke during the reset.

Before the first reset, there were still a few small pending changes to the VM. I then tested the resize again, but got the same error.
I didn't want to risk another test, because the VM is in production use. Instead, I expanded the storage by attaching an additional disk and extending it via LVM (see the rough sketch below), which worked without any problems. We use NFS storage with a qcow2 image for the VM.
I also tested a resize on another VM on the same node, without any problems.
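For reference, a minimal sketch of the LVM-based workaround, assuming the new virtual disk shows up as /dev/vdb inside the guest; the volume group and logical volume names (vg_data, lv_mysql) are just examples, adjust to your setup:
Bash:
# inside the guest, after attaching the additional virtual disk in Proxmox
pvcreate /dev/vdb                              # initialize the new disk as an LVM physical volume
vgextend vg_data /dev/vdb                      # add it to the existing volume group
lvextend -l +100%FREE /dev/vg_data/lv_mysql    # grow the logical volume into the new space
resize2fs /dev/vg_data/lv_mysql                # grow an ext4 filesystem (use xfs_growfs for XFS)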

Any ideas how to get rid of this problem?

Best regards from Stuttgart
Tux ;)
 
Hello,

during today's scheduled maintenance the VM needed to be rebooted anyway, so I used the downtime to stop and start it. After that, resizing worked again. I had assumed that the ha-manager-triggered start after the crash would have had the same effect, but apparently something about the manual restart fixed the problem. Do you have any idea why this is happening and how to fix it properly?

Regards
 
Hello,
I have the same issue today:
Code:
TASK ERROR: VM 420 qmp command 'block_resize' failed - client closed connection
and then the VM stopped :(
Any ideas?
 
Hi,
please post the VM configuration (qm config 420) and the storage configuration (cat /etc/pve/storage.cfg), and check the system log/journal for additional messages around the time of the issue. You can also apt install systemd-coredump so that we can get a full backtrace should the crash ever happen again.
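A quick sketch of those commands (adjust the journal time window to when the resize failed):
Bash:
# VM and storage configuration
qm config 420
cat /etc/pve/storage.cfg
# journal entries from the last hour (narrow this down to the time of the failed resize)
journalctl --since "1 hour ago"
# install systemd-coredump so a future crash produces a full backtrace
apt install systemd-coredump
# after the next crash, list the dumps and open the most recent one in gdb
coredumpctl list
coredumpctl gdb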
 
Hello,

qm config 420

Bash:
balloon: 0
boot: order=virtio0;ide2;net0
cores: 4
cpu: host
description: IP VxLAN%3A 192.168.105.20
ide2: none,media=cdrom
memory: 44000
name: vmndbidopaydata2
net0: virtio=36:F9:CA:D6:8F:CD,bridge=vmbr100,tag=105
net1: virtio=66:94:20:78:3B:EF,bridge=vmbr100,tag=112
net2: virtio=02:71:8D:1B:72:DA,bridge=vmbr100,tag=107
net3: virtio=0A:87:F6:76:91:85,bridge=vmbr100,tag=314
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=0cc04630-6750-42b1-a56e-409213d3d828
sockets: 1
virtio0: X_NFS:420/vm-420-disk-0.qcow2,cache=writeback,size=32G
virtio1: X_NFS:420/vm-420-disk-1.qcow2,cache=writeback,size=32G
virtio2: X_NFS:420/vm-420-disk-2.qcow2,cache=writeback,size=80G
virtio3: X_NFS:420/vm-420-disk-3.qcow2,size=100G -> resized disk
vmgenid: 5b43c332-ab7a-4f55-b242-cd8229cd17d0

cat /etc/pve/storage.cfg

Bash:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: Y_NFS_Fast
        path /mnt/pve-manual/Y_NFS_Fast
        content images
        prune-backups keep-all=1
        shared 1

dir: Y_NFS_Large
        path /mnt/pve-manual/Y_NFS_Large
        content images,backup
        prune-backups keep-all=1
        shared 1

dir: X_NFS
        path /mnt/pve-manual/nfs_X
        content images
        prune-backups keep-all=1
        shared 1

journal


Code:
Nov 05 01:32:18 bnd23 pmxcfs[2663]: [status] notice: received log
Nov 05 01:32:51 bnd23 pvedaemon[4017280]: <root@pam> starting task UPID:bnd23:000D0A7E:2DAFF9E3:672967B3:resize:420:root@pam:
Nov 05 01:32:51 bnd23 pvedaemon[854654]: <root@pam> update VM 420: resize --disk virtio3 --size +12G
Nov 05 01:32:51 bnd23 QEMU[33905]: kvm: ../block/io_uring.c:218: luring_process_completions: Assertion `luringcb->co->ctx == s->aio_context' failed.
Nov 05 01:32:51 bnd23 pvedaemon[854654]: VM 420 qmp command failed - VM 420 qmp command 'block_resize' failed - client closed connection
Nov 05 01:32:51 bnd23 pvedaemon[854654]: VM 420 qmp command 'block_resize' failed - client closed connection
Nov 05 01:32:51 bnd23 pvedaemon[4017280]: <root@pam> end task UPID:bnd23:000D0A7E:2DAFF9E3:672967B3:resize:420:root@pam: VM 420 qmp command 'block_resize' failed - client closed connection
Nov 05 01:32:52 bnd23 systemd[1]: 420.scope: Deactivated successfully.
Nov 05 01:32:52 bnd23 systemd[1]: 420.scope: Consumed 3month 2w 1d 18h 16min 23.312s CPU time.
Nov 05 01:32:52 bnd23 qmeventd[854692]: Starting cleanup for 420
Nov 05 01:32:52 bnd23 ovs-vsctl[854726]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i3
Nov 05 01:32:52 bnd23 ovs-vsctl[854726]: ovs|00002|db_ctl_base|ERR|no port named fwln420i3
Nov 05 01:32:52 bnd23 ovs-vsctl[854727]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i3
Nov 05 01:32:53 bnd23 ovs-vsctl[854735]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i0
Nov 05 01:32:53 bnd23 ovs-vsctl[854735]: ovs|00002|db_ctl_base|ERR|no port named fwln420i0
Nov 05 01:32:53 bnd23 ovs-vsctl[854736]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i0
Nov 05 01:32:53 bnd23 ovs-vsctl[854741]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i1
Nov 05 01:32:53 bnd23 ovs-vsctl[854741]: ovs|00002|db_ctl_base|ERR|no port named fwln420i1
Nov 05 01:32:53 bnd23 ovs-vsctl[854742]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i1
Nov 05 01:32:53 bnd23 ovs-vsctl[854745]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i2
Nov 05 01:32:53 bnd23 ovs-vsctl[854745]: ovs|00002|db_ctl_base|ERR|no port named fwln420i2
Nov 05 01:32:53 bnd23 ovs-vsctl[854746]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i2
Nov 05 01:32:53 bnd23 qmeventd[854692]: Finished cleanup for 420
Nov 05 01:33:16 bnd23 sshd[854850]: Connection closed by 127.0.0.1 port 32852 [preauth]
Nov 05 01:33:38 bnd23 pvedaemon[3907347]: <root@pam> starting task UPID:bnd23:000D0BB6:2DB00C26:672967E1:resize:420:root@pam:
Nov 05 01:33:38 bnd23 pvedaemon[854966]: <root@pam> update VM 420: resize --disk virtio3 --size +12G
Nov 05 01:33:38 bnd23 pvedaemon[3907347]: <root@pam> end task UPID:bnd23:000D0BB6:2DB00C26:672967E1:resize:420:root@pam: OK
Nov 05 01:33:47 bnd23 pvedaemon[854996]: start VM 420: UPID:bnd23:000D0BD4:2DB00FD2:672967EB:qmstart:420:root@pam:
Nov 05 01:33:47 bnd23 pvedaemon[1589048]: <root@pam> starting task UPID:bnd23:000D0BD4:2DB00FD2:672967EB:qmstart:420:root@pam:
Nov 05 01:33:47 bnd23 systemd[1]: Started 420.scope.
Nov 05 01:33:56 bnd23 pvedaemon[4017280]: VM 420 qmp command failed - VM 420 qmp command 'query-proxmox-support' failed - got timeout
Nov 05 01:34:00 bnd23 pvedaemon[1589048]: VM 420 qmp command failed - VM 420 qmp command 'query-proxmox-support' failed - unable to connect to VM 420 qmp socket - timeout after 51 retries
Nov 05 01:34:03 bnd23 pvestatd[2719]: VM 420 qmp command failed - VM 420 qmp command 'query-proxmox-support' failed - unable to connect to VM 420 qmp socket - timeout after 51 retries
Nov 05 01:34:03 bnd23 pvestatd[2719]: status update time (8.308 seconds)
Nov 05 01:34:05 bnd23 pvedaemon[4017280]: VM 420 qmp command failed - VM 420 qmp command 'query-proxmox-support' failed - unable to connect to VM 420 qmp socket - timeout after 51 retries
Nov 05 01:34:06 bnd23 pvedaemon[3907347]: VM 420 qmp command failed - VM 420 qmp command 'query-proxmox-support' failed - unable to connect to VM 420 qmp socket - timeout after 51 retries
Nov 05 01:34:07 bnd23 kernel: tap420i0: entered promiscuous mode
Nov 05 01:34:07 bnd23 ovs-vsctl[855095]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i0
Nov 05 01:34:07 bnd23 ovs-vsctl[855095]: ovs|00002|db_ctl_base|ERR|no port named tap420i0
Nov 05 01:34:07 bnd23 ovs-vsctl[855096]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i0
Nov 05 01:34:07 bnd23 ovs-vsctl[855096]: ovs|00002|db_ctl_base|ERR|no port named fwln420i0
Nov 05 01:34:07 bnd23 ovs-vsctl[855097]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr100 tap420i0 tag=105 -- set Interface tap420i0 mtu_request=1450
Nov 05 01:34:09 bnd23 kernel: tap420i1: entered promiscuous mode
Nov 05 01:34:09 bnd23 ovs-vsctl[855129]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i1
Nov 05 01:34:09 bnd23 ovs-vsctl[855129]: ovs|00002|db_ctl_base|ERR|no port named tap420i1
Nov 05 01:34:09 bnd23 ovs-vsctl[855130]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i1
Nov 05 01:34:09 bnd23 ovs-vsctl[855130]: ovs|00002|db_ctl_base|ERR|no port named fwln420i1
Nov 05 01:34:09 bnd23 ovs-vsctl[855131]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr100 tap420i1 tag=112 -- set Interface tap420i1 mtu_request=1450
Nov 05 01:34:10 bnd23 kernel: tap420i2: entered promiscuous mode
Nov 05 01:34:10 bnd23 ovs-vsctl[855150]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap420i2
Nov 05 01:34:10 bnd23 ovs-vsctl[855150]: ovs|00002|db_ctl_base|ERR|no port named tap420i2
Nov 05 01:34:10 bnd23 ovs-vsctl[855151]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln420i2
Nov 05 01:34:10 bnd23 ovs-vsctl[855151]: ovs|00002|db_ctl_base|ERR|no port named fwln420i2
Nov 05 01:34:10 bnd23 ovs-vsctl[855152]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr100 tap420i2 tag=107 -- set Interface tap420i2 mtu_request=1450
Nov 05 01:34:11 bnd23 pvedaemon[1589048]: VM 420 qmp command failed - VM 420 qmp command 'query-proxmox-support' failed - unable to connect to VM 420 qmp socket - timeout after 51 retries
Nov 05 01:34:11 bnd23 pmxcfs[2663]: [status] notice: received log
 
Thanks, I'll see if I can manage to reproduce the issue! What is the output of pveversion -v | grep kvm?
 
The hypervisor was updated and rebooted, and then the VMs were migrated to it using live migration, so in theory the VM may previously have been running on another node in the cluster. However, it seems to me that, in short, the migration restarts the KVM process anyway, am I right?
Sorry for my English :(
 
The hypervisor was updated and rebooted, and then the VMs were migrated to it using live migration, so in theory the VM may previously have been running on another node in the cluster. However, it seems to me that, in short, the migration restarts the KVM process anyway, am I right?
Yes, after migration the VM will be running as a QEMU instance with the version the migration target had installed.

I'm not able to reproduce the issue with pve-qemu-kvm: 9.0.2-2, and the assertion failure you see
Code:
Nov 05 01:32:51 bnd23 pvedaemon[854654]: <root@pam> update VM 420: resize --disk virtio3 --size +12G
Nov 05 01:32:51 bnd23 QEMU[33905]: kvm: ../block/io_uring.c:218: luring_process_completions: Assertion `luringcb->co->ctx == s->aio_context' failed.
Nov 05 01:32:51 bnd23 pvedaemon[854654]: VM 420 qmp command failed - VM 420 qmp command 'block_resize' failed - client closed connection
is the very same referenced in the fix
https://git.proxmox.com/?p=pve-qemu...4;hb=c1cd6a6221e4322413c768e07995894f4ff012e8
https://issues.redhat.com/browse/RHEL-34618
so I'd be a bit surprised if that is a new issue with the very same error. Can you try reproducing it with a running test VM, i.e. first hotplug a VirtIO block disk and then resize (don't even need to install an OS)?
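For instance, something along these lines should exercise the same code path (the VM ID and disk options are just example values; X_NFS is the storage name from your config above):
Bash:
# hotplug a new 4 GB VirtIO block disk into a running test VM
qm set 9999 --virtio1 X_NFS:4,format=qcow2,iothread=1
# then grow it while the VM is running; this issues the block_resize QMP command
qm resize 9999 virtio1 +1G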

Are you sure that version was already installed when you migrated the VM to this node? You can check /var/log/apt/history.log and log-rotations thereof.
 
is the very same referenced in the fix
https://git.proxmox.com/?p=pve-qemu...4;hb=c1cd6a6221e4322413c768e07995894f4ff012e8
https://issues.redhat.com/browse/RHEL-34618
so I'd be a bit surprised if that is a new issue with the very same error. Can you try reproducing it with a running test VM, i.e. first hotplug a VirtIO block disk and then resize (don't even need to install an OS)?
On another node I had the same situation yesterday :(
Bash:
Nov 10 20:25:41 bnd28 pvedaemon[191371]: <user@pam> starting task UPID:bnd28:0012FD45:30A0FCCC:673108B5:resize:430:user@pam:
Nov 10 20:25:41 bnd28 pvedaemon[1244485]: <user@pam> update VM 430: resize --disk virtio2 --size +50G
Nov 10 20:25:41 bnd28 QEMU[6277]: kvm: ../block/io_uring.c:218: luring_process_completions: Assertion `luringcb->co->ctx == s->aio_context' failed.
Nov 10 20:25:41 bnd28 pvedaemon[1244485]: VM 430 qmp command failed - VM 430 qmp command 'block_resize' failed - client closed connection
Nov 10 20:25:41 bnd28 pvedaemon[1244485]: VM 430 qmp command 'block_resize' failed - client closed connection
Nov 10 20:25:42 bnd28 pvedaemon[191371]: <user@pam> end task UPID:bnd28:0012FD45:30A0FCCC:673108B5:resize:430:user@pam: VM 430 qmp command 'block_resize' failed - client closed connection
Nov 10 20:25:42 bnd28 systemd[1]: 430.scope: Deactivated successfully.
Nov 10 20:25:42 bnd28 systemd[1]: 430.scope: Consumed 3w 2d 2h 26min 54.701s CPU time.
Nov 10 20:25:43 bnd28 qmeventd[1244512]: Starting cleanup for 430

On the test VM I have no issue:

Code:
update VM 99999999: -virtio0 X_NFS:32,format=qcow2,iothread=on
Formatting '/mnt/pve-manual/x_idop/images/99999999/vm-99999999-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=34359738368 lazy_refcounts=off refcount_bits=16
virtio0: successfully created disk 'X_NFS:99999999/vm-99999999-disk-0.qcow2,iothread=1,size=32G'
TASK OK



Are you sure that version was already installed when you migrated the VM to this node? You can check /var/log/apt/history.log and log-rotations thereof.
Of course, I checked it.
 
On another node I had the same situation yesterday :(
Bash:
Nov 10 20:25:42 bnd28 systemd[1]: 430.scope: Consumed 3w 2d 2h 26min 54.701s CPU time.
That sounds like the VM was running for a long time (note this is only the CPU time, not the actual run time). I'd guess it was still started with a buggy QEMU version, not with a fixed one.
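If you want to see which QEMU version a running VM was actually started with (as opposed to what is installed on the node), something like this should work; I believe recent PVE versions report a running-qemu field in the verbose status:
Bash:
# QEMU version the running VM was started with
qm status 430 --verbose | grep running-qemu
# QEMU version currently installed on the node, for comparison
pveversion -v | grep pve-qemu-kvm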
 
Unfortunately, I cannot prove how long the VM had been running because, as you know, it was stopped. :)
You could search the system logs/journal for the qmstart task.
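For example (using your VM ID; the start task shows up as a qmstart UPID in the journal):
Bash:
# find the last time VM 430 was started on this node
journalctl | grep "qmstart:430"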
 
Yup, you are right.
Aug 08 11:25:03 bnd28 qm[6263]: <root@pam> end task UPID:bnd28:0000187A:0001676F:66B48EE5:qmstart:430:root@pam: OK
Server uptime
Bash:
root@bnd28:~# uptime
 15:54:20 up 97 days,  5:44,  1 user,  load average: 5.55, 5.23, 5.07
 
