Migrationsproblem nach Upgrade auf 8.1

rbeudel

Hello, I have two Intel Celeron nodes and one N100 node. They only run four small 10 GB VMs for home automation. I brought both Celerons up to date today. When I then tried to spread the four VMs across them one at a time, the first migration went through. The second one stopped shortly before the end with the error "Cannot start the VM". The VM could be started again by hand on the source node. Here are the messages from the log:
Code:
Dec 09 10:32:06 NUC3 QEMU[1006]: kvm: ../block/io.c:1819: bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
Dec 09 10:32:06 NUC3 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Dec 09 10:32:06 NUC3 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Dec 09 10:32:06 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 qmp command 'query-migrate' failed - client closed connection
Dec 09 10:32:06 NUC3 pvedaemon[2416851]: query migrate failed: VM 101 qmp command 'query-migrate' failed - client closed connection
Dec 09 10:32:07 NUC3 systemd[1]: 101.scope: Deactivated successfully.
Dec 09 10:32:07 NUC3 systemd[1]: 101.scope: Consumed 11h 58min 49.379s CPU time.
Dec 09 10:32:07 NUC3 qmeventd[2417260]: Starting cleanup for 101
Dec 09 10:32:07 NUC3 qmeventd[2417260]: trying to acquire lock...
Dec 09 10:32:08 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:08 NUC3 pvedaemon[2416851]: query migrate failed: VM 101 not running
Dec 09 10:32:09 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:09 NUC3 pvedaemon[2416851]: query migrate failed: VM 101 not running
Dec 09 10:32:10 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:10 NUC3 pvedaemon[2416851]: query migrate failed: VM 101 not running
Dec 09 10:32:11 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:11 NUC3 pvedaemon[2416851]: query migrate failed: VM 101 not running
Dec 09 10:32:12 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:12 NUC3 pvedaemon[2416851]: query migrate failed: VM 101 not running
Dec 09 10:32:12 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:12 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:12 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:12 NUC3 pvedaemon[2416851]: VM 101 qmp command failed - VM 101 not running
Dec 09 10:32:14 NUC3 pmxcfs[783]: [status] notice: received log
Dec 09 10:32:14 NUC3 pmxcfs[783]: [status] notice: received log
Dec 09 10:32:16 NUC3 pmxcfs[783]: [status] notice: received log
Dec 09 10:32:17 NUC3 pmxcfs[783]: [status] notice: received log
Dec 09 10:32:17 NUC3 qmeventd[2417260]: can't lock file '/var/lock/qemu-server/lock-101.conf' - got timeout
Dec 09 10:32:18 NUC3 pvedaemon[2416851]: migration problems
Dec 09 10:32:18 NUC3 pvedaemon[930]: <root@pam> end task UPID:NUC3:0024E0D3:052B0F06:65743390:qmigrate:101:root@pam: migration problems
Dec 09 10:32:41 NUC3 pmxcfs[783]: [status] notice: received log
Dec 09 10:32:43 NUC3 pvedaemon[2417361]: start VM 101: UPID:NUC3:0024E2D1:052B51B3:6574343B:qmstart:101:root@pam:
Dec 09 10:32:43 NUC3 pvedaemon[930]: <root@pam> starting task UPID:NUC3:0024E2D1:052B51B3:6574343B:qmstart:101:root@pam:
Dec 09 10:32:43 NUC3 systemd[1]: Started 101.scope.
Dec 09 10:32:44 NUC3 kernel: device tap101i0 entered promiscuous mode
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: vmbr0: port 3(fwpr101p0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: device fwln101i0 left promiscuous mode
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: device fwpr101p0 left promiscuous mode
Dec 09 10:32:44 NUC3 kernel: vmbr0: port 3(fwpr101p0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: vmbr0: port 3(fwpr101p0) entered blocking state
Dec 09 10:32:44 NUC3 kernel: vmbr0: port 3(fwpr101p0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: device fwpr101p0 entered promiscuous mode
Dec 09 10:32:44 NUC3 kernel: vmbr0: port 3(fwpr101p0) entered blocking state
Dec 09 10:32:44 NUC3 kernel: vmbr0: port 3(fwpr101p0) entered forwarding state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: device fwln101i0 entered promiscuous mode
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Dec 09 10:32:44 NUC3 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Dec 09 10:32:44 NUC3 pvedaemon[930]: <root@pam> end task UPID:NUC3:0024E2D1:052B51B3:6574343B:qmstart:101:root@pam: OK
Before I do anything else, I'll leave everything as it is for now. The previous version was 8.0.4. Is there anything I can check?
Best regards,
Ralf
 
Hi,
was there a lot of IO going on in the VM during the migration?
Code:
Dec 09 10:32:06 NUC3 QEMU[1006]: kvm: ../block/io.c:1819: bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
That is most likely a rare (and unfortunately long-standing) issue with disk migration. QEMU 8.2 will bring the infrastructure needed to fix it: https://lists.nongnu.org/archive/html/qemu-devel/2023-10/msg10397.html
After that, our qemu-server still needs some adjustments.
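Until that fix lands, one thing worth checking after an upgrade is which QEMU build each node is actually running, since the assertion above comes from the QEMU binary itself. A minimal sketch (assuming standard PVE 8.x package names; run on each node):

```shell
# Show the relevant parts of the version stack on this node;
# pve-qemu-kvm is the package that ships the QEMU binary.
pveversion -v | grep -E 'pve-qemu-kvm|qemu-server|pve-manager'

# A VM started before the upgrade keeps running the old QEMU binary
# until it is stopped and started again (or live-migrated) -- the
# verbose status shows which version the guest process is on.
qm status 101 --verbose | grep -i qemu
```

Comparing the `running-qemu` version of the guest against the installed `pve-qemu-kvm` package shows whether a VM is still on the pre-upgrade binary.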
 
Hi,
was there a lot of IO going on in the VM during the migration?
Hello, not really. It just continuously stores measured values and states. I tried it again and now it works. I just wanted to report it, seeing as I'm not paying anything anyway.
Many thanks,
Ralf
 
Hello, not really. It just continuously stores measured values and states. I tried it again and now it works.
Okay, then you probably just had very bad luck with the timing.
I just wanted to report it, seeing as I'm not paying anything anyway.
Reports like this are always welcome!
 
@rbeudel did you happen to turn up the number of parallel jobs for the migration process?
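If the parallel jobs in the bulk-migrate dialog were raised, one way to rule that out is to migrate the VMs strictly one after another from the CLI. A hedged sketch (the VM IDs 100-103 and the target node name NUC3 are assumptions, not taken from this thread's actual setup):

```shell
# Migrate the four home-automation VMs sequentially; each `qm migrate`
# blocks until that single migration finishes or fails, so there is
# never more than one migration running at a time.
for vmid in 100 101 102 103; do
    qm migrate "$vmid" NUC3 --online || echo "migration of VM $vmid failed" >&2
done
```

Running the migrations one at a time also keeps the concurrent disk IO per node low, which matters if the assertion is indeed timing-sensitive.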
 