[SOLVED] Live migration of a VM ends up in error

bluepr0

Well-Known Member
Hi!

I'm currently running a Home Assistant OS VM created with the great scripts at https://tteck.github.io/Proxmox/. Since I have a Zigbee network, the VM also has a USB device mapped to it. I know it's not possible to live-migrate the VM with that USB device attached (the passthrough ties it to the host), so when I want to migrate it to my other node, what I usually do is remove the USB mapping and then start the migration.
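
For reference, here's a minimal sketch of the CLI equivalent of what I do. The usb0 key and the VM ID are assumptions; check qm config 101 for the actual key:

Bash:
# Detach the USB mapping before migrating (usb0 is an assumed key)
qm set 101 --delete usb0
# Live-migrate; VMs with local LVM disks need --with-local-disks
qm migrate 101 pve --online --with-local-disks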

But the problem is that every time I try to migrate it, I get an error (pasted below).
Code:
2024-04-14 21:02:42 starting migration of VM 101 to node 'pve' (10.0.1.37)
2024-04-14 21:02:42 found local disk 'local-lvm:vm-101-disk-0' (attached)
2024-04-14 21:02:42 found local disk 'local-lvm:vm-101-disk-1' (attached)
2024-04-14 21:02:42 starting VM 101 on remote node 'pve'
2024-04-14 21:02:47 volume 'local-lvm:vm-101-disk-0' is 'local-lvm:vm-101-disk-0' on the target
2024-04-14 21:02:47 volume 'local-lvm:vm-101-disk-1' is 'local-lvm:vm-101-disk-1' on the target
2024-04-14 21:02:47 start remote tunnel
2024-04-14 21:02:48 ssh tunnel ver 1
2024-04-14 21:02:48 starting storage migration
2024-04-14 21:02:48 efidisk0: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-efidisk0
drive mirror is starting for drive-efidisk0
drive-efidisk0: transferred 16.0 KiB of 528.0 KiB (3.03%) in 0s
drive-efidisk0: transferred 528.0 KiB of 528.0 KiB (100.00%) in 1s, ready
all 'mirror' jobs are ready
2024-04-14 21:02:49 scsi0: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 149.0 MiB of 32.0 GiB (0.45%) in 53s
drive-scsi0: transferred 416.0 MiB of 32.0 GiB (1.27%) in 54s
drive-scsi0: transferred 696.0 MiB of 32.0 GiB (2.12%) in 55s
drive-scsi0: transferred 977.0 MiB of 32.0 GiB (2.98%) in 56s
drive-scsi0: transferred 1.2 GiB of 32.0 GiB (3.81%) in 57s
drive-scsi0: transferred 1.5 GiB of 32.0 GiB (4.64%) in 58s
drive-scsi0: transferred 1.8 GiB of 32.0 GiB (5.50%) in 59s
drive-scsi0: transferred 2.0 GiB of 32.0 GiB (6.36%) in 1m
drive-scsi0: transferred 2.3 GiB of 32.0 GiB (7.21%) in 1m 1s
drive-scsi0: transferred 2.6 GiB of 32.0 GiB (8.06%) in 1m 2s
drive-scsi0: transferred 2.9 GiB of 32.0 GiB (8.92%) in 1m 3s
drive-scsi0: transferred 3.1 GiB of 32.0 GiB (9.78%) in 1m 4s
drive-scsi0: transferred 3.4 GiB of 32.0 GiB (10.60%) in 1m 5s
drive-scsi0: transferred 3.7 GiB of 32.0 GiB (11.45%) in 1m 6s
drive-scsi0: transferred 3.9 GiB of 32.0 GiB (12.30%) in 1m 7s
drive-scsi0: transferred 4.2 GiB of 32.0 GiB (13.16%) in 1m 8s
drive-scsi0: transferred 4.5 GiB of 32.0 GiB (14.01%) in 1m 9s
drive-scsi0: transferred 4.7 GiB of 32.0 GiB (14.83%) in 1m 10s
drive-scsi0: transferred 5.0 GiB of 32.0 GiB (15.58%) in 1m 11s
drive-scsi0: transferred 5.3 GiB of 32.0 GiB (16.44%) in 1m 12s
drive-scsi0: transferred 5.5 GiB of 32.0 GiB (17.24%) in 1m 13s
drive-scsi0: transferred 5.8 GiB of 32.0 GiB (18.05%) in 1m 14s
drive-scsi0: transferred 6.0 GiB of 32.0 GiB (18.90%) in 1m 15s
drive-scsi0: transferred 6.3 GiB of 32.0 GiB (19.72%) in 1m 16s
drive-scsi0: transferred 6.6 GiB of 32.0 GiB (20.57%) in 1m 17s
drive-scsi0: transferred 6.9 GiB of 32.0 GiB (21.43%) in 1m 18s
drive-scsi0: transferred 7.1 GiB of 32.0 GiB (22.28%) in 1m 19s
drive-scsi0: transferred 7.4 GiB of 32.0 GiB (23.10%) in 1m 20s
drive-scsi0: transferred 7.7 GiB of 32.0 GiB (23.92%) in 1m 21s
drive-scsi0: transferred 7.9 GiB of 32.0 GiB (24.78%) in 1m 22s
drive-scsi0: transferred 8.2 GiB of 32.0 GiB (25.63%) in 1m 23s
drive-scsi0: transferred 8.5 GiB of 32.0 GiB (26.47%) in 1m 24s
drive-scsi0: transferred 8.7 GiB of 32.0 GiB (27.29%) in 1m 25s
drive-scsi0: transferred 9.0 GiB of 32.0 GiB (28.13%) in 1m 26s
drive-scsi0: transferred 9.3 GiB of 32.0 GiB (28.93%) in 1m 27s
drive-scsi0: transferred 9.5 GiB of 32.0 GiB (29.78%) in 1m 28s
drive-scsi0: transferred 9.8 GiB of 32.0 GiB (30.64%) in 1m 29s
drive-scsi0: transferred 10.1 GiB of 32.0 GiB (31.47%) in 1m 30s
drive-scsi0: transferred 10.3 GiB of 32.0 GiB (32.30%) in 1m 31s
drive-scsi0: transferred 10.6 GiB of 32.0 GiB (33.13%) in 1m 32s
drive-scsi0: transferred 10.9 GiB of 32.0 GiB (33.98%) in 1m 33s
drive-scsi0: transferred 11.1 GiB of 32.0 GiB (34.82%) in 1m 34s
drive-scsi0: transferred 11.4 GiB of 32.0 GiB (35.66%) in 1m 35s
drive-scsi0: transferred 11.7 GiB of 32.0 GiB (36.50%) in 1m 36s
drive-scsi0: transferred 12.0 GiB of 32.0 GiB (37.36%) in 1m 37s
drive-scsi0: transferred 12.2 GiB of 32.0 GiB (38.20%) in 1m 38s
drive-scsi0: transferred 12.5 GiB of 32.0 GiB (39.05%) in 1m 39s
drive-scsi0: transferred 12.8 GiB of 32.0 GiB (39.86%) in 1m 40s
drive-scsi0: transferred 13.0 GiB of 32.0 GiB (40.71%) in 1m 41s
drive-scsi0: transferred 13.3 GiB of 32.0 GiB (41.52%) in 1m 42s
drive-scsi0: transferred 13.5 GiB of 32.0 GiB (42.34%) in 1m 43s
drive-scsi0: transferred 13.8 GiB of 32.0 GiB (43.19%) in 1m 44s
drive-scsi0: transferred 14.1 GiB of 32.0 GiB (44.06%) in 1m 45s
drive-scsi0: transferred 14.4 GiB of 32.0 GiB (44.85%) in 1m 46s
drive-scsi0: transferred 14.6 GiB of 32.0 GiB (45.70%) in 1m 47s
drive-scsi0: transferred 14.9 GiB of 32.0 GiB (46.55%) in 1m 48s
drive-scsi0: transferred 15.2 GiB of 32.0 GiB (47.40%) in 1m 49s
drive-scsi0: transferred 15.4 GiB of 32.0 GiB (48.23%) in 1m 50s
drive-scsi0: transferred 15.7 GiB of 32.0 GiB (49.03%) in 1m 51s
drive-scsi0: transferred 16.0 GiB of 32.0 GiB (49.85%) in 1m 52s
drive-scsi0: transferred 16.2 GiB of 32.0 GiB (50.72%) in 1m 53s
drive-scsi0: transferred 16.5 GiB of 32.0 GiB (51.57%) in 1m 54s
drive-scsi0: transferred 16.8 GiB of 32.0 GiB (52.42%) in 1m 55s
drive-scsi0: transferred 17.0 GiB of 32.0 GiB (53.24%) in 1m 56s
drive-scsi0: transferred 17.3 GiB of 32.0 GiB (54.08%) in 1m 57s
drive-scsi0: transferred 17.6 GiB of 32.0 GiB (54.86%) in 1m 58s
drive-scsi0: transferred 17.8 GiB of 32.0 GiB (55.71%) in 1m 59s
drive-scsi0: transferred 18.1 GiB of 32.0 GiB (56.57%) in 2m
drive-scsi0: transferred 18.4 GiB of 32.0 GiB (57.43%) in 2m 1s
drive-scsi0: transferred 18.6 GiB of 32.0 GiB (58.27%) in 2m 2s
drive-scsi0: transferred 18.9 GiB of 32.0 GiB (59.04%) in 2m 3s
drive-scsi0: transferred 19.2 GiB of 32.0 GiB (59.89%) in 2m 4s
drive-scsi0: transferred 19.4 GiB of 32.0 GiB (60.72%) in 2m 5s
drive-scsi0: transferred 19.7 GiB of 32.0 GiB (61.54%) in 2m 6s
drive-scsi0: transferred 20.0 GiB of 32.0 GiB (62.31%) in 2m 7s
drive-scsi0: transferred 20.2 GiB of 32.0 GiB (63.14%) in 2m 8s
drive-scsi0: transferred 20.5 GiB of 32.0 GiB (63.99%) in 2m 9s
drive-scsi0: transferred 20.8 GiB of 32.0 GiB (64.85%) in 2m 11s
drive-scsi0: transferred 21.0 GiB of 32.0 GiB (65.69%) in 2m 12s
drive-scsi0: transferred 21.3 GiB of 32.0 GiB (66.50%) in 2m 13s
drive-scsi0: transferred 21.6 GiB of 32.0 GiB (67.35%) in 2m 14s
drive-scsi0: transferred 21.8 GiB of 32.0 GiB (68.20%) in 2m 15s
drive-scsi0: transferred 22.1 GiB of 32.0 GiB (69.05%) in 2m 16s
drive-scsi0: transferred 22.4 GiB of 32.0 GiB (69.88%) in 2m 17s
drive-scsi0: transferred 22.6 GiB of 32.0 GiB (70.72%) in 2m 18s
drive-scsi0: transferred 22.9 GiB of 32.0 GiB (71.55%) in 2m 19s
drive-scsi0: transferred 23.2 GiB of 32.0 GiB (72.36%) in 2m 20s
drive-scsi0: transferred 23.4 GiB of 32.0 GiB (73.20%) in 2m 21s
drive-scsi0: transferred 23.7 GiB of 32.0 GiB (74.03%) in 2m 22s
drive-scsi0: transferred 24.0 GiB of 32.0 GiB (74.88%) in 2m 23s
drive-scsi0: transferred 24.2 GiB of 32.0 GiB (75.71%) in 2m 24s
drive-scsi0: transferred 24.5 GiB of 32.0 GiB (76.53%) in 2m 25s
drive-scsi0: transferred 24.8 GiB of 32.0 GiB (77.33%) in 2m 26s
drive-scsi0: transferred 25.0 GiB of 32.0 GiB (78.18%) in 2m 27s
drive-scsi0: transferred 25.3 GiB of 32.0 GiB (79.03%) in 2m 28s
drive-scsi0: transferred 25.6 GiB of 32.0 GiB (79.89%) in 2m 29s
drive-scsi0: transferred 25.8 GiB of 32.0 GiB (80.69%) in 2m 30s
drive-scsi0: transferred 26.1 GiB of 32.0 GiB (81.41%) in 2m 31s
drive-scsi0: transferred 26.3 GiB of 32.0 GiB (82.26%) in 2m 32s
drive-scsi0: transferred 26.6 GiB of 32.0 GiB (83.06%) in 2m 33s
drive-scsi0: transferred 26.9 GiB of 32.0 GiB (83.89%) in 2m 34s
drive-scsi0: transferred 27.1 GiB of 32.0 GiB (84.68%) in 2m 35s
drive-scsi0: transferred 27.3 GiB of 32.0 GiB (85.39%) in 2m 36s
drive-scsi0: transferred 27.6 GiB of 32.0 GiB (86.24%) in 2m 37s
drive-scsi0: transferred 27.9 GiB of 32.0 GiB (87.04%) in 2m 38s
drive-scsi0: transferred 28.1 GiB of 32.0 GiB (87.85%) in 2m 39s
drive-scsi0: transferred 28.4 GiB of 32.0 GiB (88.71%) in 2m 40s
drive-scsi0: transferred 28.7 GiB of 32.0 GiB (89.56%) in 2m 41s
drive-scsi0: transferred 28.9 GiB of 32.0 GiB (90.40%) in 2m 42s
drive-scsi0: transferred 29.2 GiB of 32.0 GiB (91.26%) in 2m 43s
drive-scsi0: transferred 29.5 GiB of 32.0 GiB (92.11%) in 2m 44s
drive-scsi0: transferred 29.8 GiB of 32.0 GiB (92.92%) in 2m 45s
drive-scsi0: transferred 30.0 GiB of 32.0 GiB (93.72%) in 2m 46s
drive-scsi0: transferred 30.3 GiB of 32.0 GiB (94.58%) in 2m 47s
drive-scsi0: transferred 30.5 GiB of 32.0 GiB (95.39%) in 2m 48s
drive-scsi0: transferred 30.8 GiB of 32.0 GiB (96.20%) in 2m 49s
drive-scsi0: transferred 31.1 GiB of 32.0 GiB (97.02%) in 2m 50s
drive-scsi0: transferred 31.3 GiB of 32.0 GiB (97.81%) in 2m 51s
drive-scsi0: transferred 31.6 GiB of 32.0 GiB (98.67%) in 2m 52s
drive-scsi0: transferred 31.9 GiB of 32.0 GiB (99.48%) in 2m 53s
drive-scsi0: transferred 32.0 GiB of 32.0 GiB (100.00%) in 2m 54s, ready
all 'mirror' jobs are ready
2024-04-14 21:05:43 starting online/live migration on unix:/run/qemu-server/101.migrate
2024-04-14 21:05:43 set migration capabilities
2024-04-14 21:05:43 migration downtime limit: 100 ms
2024-04-14 21:05:43 migration cachesize: 1.0 GiB
2024-04-14 21:05:43 set migration parameters
2024-04-14 21:05:43 start migrate command to unix:/run/qemu-server/101.migrate
2024-04-14 21:05:44 migration active, transferred 214.6 MiB of 8.0 GiB VM-state, 281.0 MiB/s
2024-04-14 21:05:45 migration active, transferred 482.7 MiB of 8.0 GiB VM-state, 262.0 MiB/s
2024-04-14 21:05:46 migration active, transferred 762.8 MiB of 8.0 GiB VM-state, 281.6 MiB/s
2024-04-14 21:05:47 migration active, transferred 1.0 GiB of 8.0 GiB VM-state, 278.9 MiB/s
2024-04-14 21:05:48 migration active, transferred 1.3 GiB of 8.0 GiB VM-state, 273.7 MiB/s
2024-04-14 21:05:49 migration active, transferred 1.6 GiB of 8.0 GiB VM-state, 278.9 MiB/s
2024-04-14 21:05:50 migration active, transferred 1.8 GiB of 8.0 GiB VM-state, 281.6 MiB/s
2024-04-14 21:05:51 migration active, transferred 2.1 GiB of 8.0 GiB VM-state, 283.7 MiB/s
2024-04-14 21:05:52 migration active, transferred 2.4 GiB of 8.0 GiB VM-state, 283.7 MiB/s
2024-04-14 21:05:53 migration active, transferred 2.6 GiB of 8.0 GiB VM-state, 274.1 MiB/s
2024-04-14 21:05:54 migration active, transferred 2.8 GiB of 8.0 GiB VM-state, 916.7 MiB/s
2024-04-14 21:05:55 migration active, transferred 3.1 GiB of 8.0 GiB VM-state, 282.5 MiB/s
2024-04-14 21:05:56 migration active, transferred 3.4 GiB of 8.0 GiB VM-state, 256.5 MiB/s
2024-04-14 21:05:57 migration active, transferred 3.6 GiB of 8.0 GiB VM-state, 228.6 MiB/s
2024-04-14 21:05:58 migration active, transferred 3.9 GiB of 8.0 GiB VM-state, 296.8 MiB/s
2024-04-14 21:05:59 migration active, transferred 4.2 GiB of 8.0 GiB VM-state, 286.5 MiB/s
2024-04-14 21:06:00 migration active, transferred 4.4 GiB of 8.0 GiB VM-state, 262.2 MiB/s
2024-04-14 21:06:01 migration active, transferred 4.7 GiB of 8.0 GiB VM-state, 276.3 MiB/s
2024-04-14 21:06:02 migration active, transferred 5.0 GiB of 8.0 GiB VM-state, 281.0 MiB/s
2024-04-14 21:06:03 migration active, transferred 5.2 GiB of 8.0 GiB VM-state, 184.3 MiB/s
2024-04-14 21:06:04 migration active, transferred 5.5 GiB of 8.0 GiB VM-state, 281.0 MiB/s
2024-04-14 21:06:05 migration active, transferred 5.8 GiB of 8.0 GiB VM-state, 281.0 MiB/s
2024-04-14 21:06:06 migration active, transferred 6.0 GiB of 8.0 GiB VM-state, 280.4 MiB/s
2024-04-14 21:06:07 migration active, transferred 6.3 GiB of 8.0 GiB VM-state, 266.1 MiB/s
2024-04-14 21:06:08 migration active, transferred 6.5 GiB of 8.0 GiB VM-state, 288.9 MiB/s
2024-04-14 21:06:09 migration active, transferred 6.8 GiB of 8.0 GiB VM-state, 281.3 MiB/s
2024-04-14 21:06:10 migration active, transferred 7.1 GiB of 8.0 GiB VM-state, 283.4 MiB/s
2024-04-14 21:06:11 migration active, transferred 7.3 GiB of 8.0 GiB VM-state, 179.7 MiB/s
2024-04-14 21:06:14 migration active, transferred 7.4 GiB of 8.0 GiB VM-state, 80.1 MiB/s
2024-04-14 21:06:14 xbzrle: send updates to 24063 pages in 9.5 MiB encoded memory, cache-miss 10.31%, overflow 125
2024-04-14 21:06:14 auto-increased downtime to continue migration: 200 ms
2024-04-14 21:06:16 migration active, transferred 7.4 GiB of 8.0 GiB VM-state, 26.5 MiB/s, VM dirties lots of memory: 63.8 MiB/s
2024-04-14 21:06:16 xbzrle: send updates to 49344 pages in 16.8 MiB encoded memory, cache-miss 23.29%, overflow 235
2024-04-14 21:06:16 auto-increased downtime to continue migration: 400 ms
2024-04-14 21:06:17 migration active, transferred 7.4 GiB of 8.0 GiB VM-state, 48.0 MiB/s, VM dirties lots of memory: 57.8 MiB/s
2024-04-14 21:06:17 xbzrle: send updates to 75489 pages in 25.7 MiB encoded memory, cache-miss 8.85%, overflow 391
2024-04-14 21:06:18 average migration speed: 234.6 MiB/s - downtime 266 ms
2024-04-14 21:06:18 migration status: completed
all 'mirror' jobs are ready
drive-efidisk0: Completing block job_id...
drive-efidisk0: Completed successfully.
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
all 'mirror' jobs are ready
drive-efidisk0: Completing block job_id...
drive-efidisk0: Completed successfully.
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
all 'mirror' jobs are ready
drive-efidisk0: Completing block job_id...
drive-efidisk0: Completed successfully.
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-efidisk0: mirror-job finished
drive-scsi0: mirror-job finished
2024-04-14 21:06:21 stopping NBD storage migration server on target.
2024-04-14 21:06:22 ERROR: tunnel replied 'ERR: resume failed - VM 101 not running' to command 'resume 101'
  Logical volume "vm-101-disk-0" successfully removed.
  Logical volume "vm-101-disk-1" successfully removed.
2024-04-14 21:06:26 ERROR: migration finished with problems (duration 00:03:45)
TASK ERROR: migration problems

Any ideas on what could be going wrong?
 
Hi,
please share the output of pveversion -v from both the source and the target node, as well as the VM configuration (qm config <ID>). On the migration target, please also check the system logs/journal from around the time the issue happened. It's likely there is a message about the VM crashing (or another hint as to why it stopped).
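
For example, something along these lines (the VM ID and time window are just placeholders, adjust them to when the migration failed):

Code:
# On both nodes
pveversion -v
qm config 101

# On the migration target: journal entries around the failure
journalctl --since "2024-04-14 21:02" --until "2024-04-14 21:07"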
 
Hey! Thanks for your reply.

Node 1 (pve)
Code:
Linux pve 6.5.13-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-5 (2024-04-05T11:03Z) x86_64

root@pve:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.1.10 (running version: 8.1.10/4b06efb5db453f29)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
ceph: 18.2.1-pve2
ceph-fuse: 18.2.1-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.3
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.5
libpve-cluster-perl: 8.0.5
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.6
libpve-network-perl: 0.9.6
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.5-1
proxmox-backup-file-restore: 3.1.5-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.5
pve-cluster: 8.0.5
pve-container: 5.0.9
pve-docs: 8.1.5
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.11-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.1.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
root@pve:~#

Node 2 (pve2)
Code:
Linux pve2 6.5.13-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-5 (2024-04-05T11:03Z) x86_64

root@pve2:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.1.10 (running version: 8.1.10/4b06efb5db453f29)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
ceph: 18.2.2-pve1
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.3
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.5
libpve-cluster-perl: 8.0.5
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.6
libpve-network-perl: 0.9.6
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.5-1
proxmox-backup-file-restore: 3.1.5-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.5
pve-cluster: 8.0.5
pve-container: 5.0.9
pve-docs: 8.1.5
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.11-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.1.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
root@pve2:~#

Update: the migration now seems to work. This time I first removed the USB device, powered the VM off, turned it back on, and then started the migration. Is this how it's supposed to work, or is it actually a bug?
 

Here's the log of the successful migration:
Code:
2024-04-15 20:53:44 starting migration of VM 101 to node 'pve2' (10.0.1.36)
2024-04-15 20:53:44 found local disk 'local-lvm:vm-101-disk-0' (attached)
2024-04-15 20:53:44 found local disk 'local-lvm:vm-101-disk-1' (attached)
2024-04-15 20:53:44 starting VM 101 on remote node 'pve2'
2024-04-15 20:53:47 volume 'local-lvm:vm-101-disk-0' is 'local-lvm:vm-101-disk-0' on the target
2024-04-15 20:53:47 volume 'local-lvm:vm-101-disk-1' is 'local-lvm:vm-101-disk-1' on the target
2024-04-15 20:53:47 start remote tunnel
2024-04-15 20:53:48 ssh tunnel ver 1
2024-04-15 20:53:48 starting storage migration
2024-04-15 20:53:48 scsi0: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 203.0 MiB of 32.0 GiB (0.62%) in 43s
drive-scsi0: transferred 439.0 MiB of 32.0 GiB (1.34%) in 44s
drive-scsi0: transferred 677.0 MiB of 32.0 GiB (2.07%) in 45s
drive-scsi0: transferred 894.0 MiB of 32.0 GiB (2.73%) in 46s
drive-scsi0: transferred 1.1 GiB of 32.0 GiB (3.39%) in 47s
drive-scsi0: transferred 1.3 GiB of 32.0 GiB (4.10%) in 48s
drive-scsi0: transferred 1.5 GiB of 32.0 GiB (4.80%) in 49s
drive-scsi0: transferred 1.8 GiB of 32.0 GiB (5.52%) in 50s
drive-scsi0: transferred 2.0 GiB of 32.0 GiB (6.22%) in 51s
drive-scsi0: transferred 2.2 GiB of 32.0 GiB (6.92%) in 52s
drive-scsi0: transferred 2.4 GiB of 32.0 GiB (7.60%) in 53s
drive-scsi0: transferred 2.7 GiB of 32.0 GiB (8.30%) in 54s
drive-scsi0: transferred 2.9 GiB of 32.0 GiB (9.01%) in 55s
drive-scsi0: transferred 3.1 GiB of 32.0 GiB (9.71%) in 56s
drive-scsi0: transferred 3.3 GiB of 32.0 GiB (10.41%) in 57s
drive-scsi0: transferred 3.6 GiB of 32.0 GiB (11.14%) in 58s
drive-scsi0: transferred 3.8 GiB of 32.0 GiB (11.84%) in 59s
drive-scsi0: transferred 4.0 GiB of 32.0 GiB (12.54%) in 1m
drive-scsi0: transferred 4.2 GiB of 32.0 GiB (13.23%) in 1m 1s
drive-scsi0: transferred 4.5 GiB of 32.0 GiB (13.93%) in 1m 2s
drive-scsi0: transferred 4.7 GiB of 32.0 GiB (14.66%) in 1m 3s
drive-scsi0: transferred 4.9 GiB of 32.0 GiB (15.40%) in 1m 4s
drive-scsi0: transferred 5.1 GiB of 32.0 GiB (16.08%) in 1m 5s
drive-scsi0: transferred 5.4 GiB of 32.0 GiB (16.75%) in 1m 6s
drive-scsi0: transferred 5.6 GiB of 32.0 GiB (17.45%) in 1m 7s
drive-scsi0: transferred 5.8 GiB of 32.0 GiB (18.17%) in 1m 8s
drive-scsi0: transferred 6.0 GiB of 32.0 GiB (18.88%) in 1m 9s
drive-scsi0: transferred 6.3 GiB of 32.0 GiB (19.59%) in 1m 10s
drive-scsi0: transferred 6.5 GiB of 32.0 GiB (20.30%) in 1m 11s
drive-scsi0: transferred 6.7 GiB of 32.0 GiB (21.01%) in 1m 12s
drive-scsi0: transferred 6.9 GiB of 32.0 GiB (21.71%) in 1m 13s
drive-scsi0: transferred 7.2 GiB of 32.0 GiB (22.42%) in 1m 14s
drive-scsi0: transferred 7.4 GiB of 32.0 GiB (23.12%) in 1m 15s
drive-scsi0: transferred 7.6 GiB of 32.0 GiB (23.84%) in 1m 16s
drive-scsi0: transferred 7.9 GiB of 32.0 GiB (24.57%) in 1m 17s
drive-scsi0: transferred 8.1 GiB of 32.0 GiB (25.29%) in 1m 18s
drive-scsi0: transferred 8.3 GiB of 32.0 GiB (25.97%) in 1m 19s
drive-scsi0: transferred 8.5 GiB of 32.0 GiB (26.67%) in 1m 20s
drive-scsi0: transferred 8.8 GiB of 32.0 GiB (27.39%) in 1m 21s
drive-scsi0: transferred 9.0 GiB of 32.0 GiB (28.12%) in 1m 22s
drive-scsi0: transferred 9.2 GiB of 32.0 GiB (28.84%) in 1m 23s
drive-scsi0: transferred 9.5 GiB of 32.0 GiB (29.54%) in 1m 24s
drive-scsi0: transferred 9.7 GiB of 32.0 GiB (30.22%) in 1m 25s
drive-scsi0: transferred 9.9 GiB of 32.0 GiB (30.92%) in 1m 26s
drive-scsi0: transferred 10.1 GiB of 32.0 GiB (31.61%) in 1m 27s
drive-scsi0: transferred 10.3 GiB of 32.0 GiB (32.33%) in 1m 28s
drive-scsi0: transferred 10.6 GiB of 32.0 GiB (33.04%) in 1m 29s
drive-scsi0: transferred 10.8 GiB of 32.0 GiB (33.75%) in 1m 30s
drive-scsi0: transferred 11.0 GiB of 32.0 GiB (34.46%) in 1m 31s
drive-scsi0: transferred 11.3 GiB of 32.0 GiB (35.17%) in 1m 32s
drive-scsi0: transferred 11.5 GiB of 32.0 GiB (35.89%) in 1m 33s
drive-scsi0: transferred 11.7 GiB of 32.0 GiB (36.60%) in 1m 34s
drive-scsi0: transferred 11.9 GiB of 32.0 GiB (37.31%) in 1m 35s
drive-scsi0: transferred 12.2 GiB of 32.0 GiB (38.01%) in 1m 36s
drive-scsi0: transferred 12.4 GiB of 32.0 GiB (38.72%) in 1m 38s
drive-scsi0: transferred 12.6 GiB of 32.0 GiB (39.43%) in 1m 39s
drive-scsi0: transferred 12.9 GiB of 32.0 GiB (40.15%) in 1m 40s
drive-scsi0: transferred 13.1 GiB of 32.0 GiB (40.88%) in 1m 41s
drive-scsi0: transferred 13.3 GiB of 32.0 GiB (41.59%) in 1m 42s
drive-scsi0: transferred 13.5 GiB of 32.0 GiB (42.32%) in 1m 43s
drive-scsi0: transferred 13.8 GiB of 32.0 GiB (43.03%) in 1m 44s
drive-scsi0: transferred 14.0 GiB of 32.0 GiB (43.75%) in 1m 45s
drive-scsi0: transferred 14.2 GiB of 32.0 GiB (44.43%) in 1m 46s
drive-scsi0: transferred 14.4 GiB of 32.0 GiB (45.13%) in 1m 47s
drive-scsi0: transferred 14.7 GiB of 32.0 GiB (45.83%) in 1m 48s
drive-scsi0: transferred 14.9 GiB of 32.0 GiB (46.56%) in 1m 49s
drive-scsi0: transferred 15.1 GiB of 32.0 GiB (47.26%) in 1m 50s
drive-scsi0: transferred 15.3 GiB of 32.0 GiB (47.95%) in 1m 51s
drive-scsi0: transferred 15.6 GiB of 32.0 GiB (48.63%) in 1m 52s
drive-scsi0: transferred 15.8 GiB of 32.0 GiB (49.33%) in 1m 53s
drive-scsi0: transferred 16.0 GiB of 32.0 GiB (50.03%) in 1m 54s
drive-scsi0: transferred 16.2 GiB of 32.0 GiB (50.74%) in 1m 55s
drive-scsi0: transferred 16.5 GiB of 32.0 GiB (51.41%) in 1m 56s
drive-scsi0: transferred 16.7 GiB of 32.0 GiB (52.12%) in 1m 57s
drive-scsi0: transferred 16.9 GiB of 32.0 GiB (52.84%) in 1m 58s
drive-scsi0: transferred 17.1 GiB of 32.0 GiB (53.54%) in 1m 59s
drive-scsi0: transferred 17.4 GiB of 32.0 GiB (54.24%) in 2m
drive-scsi0: transferred 17.6 GiB of 32.0 GiB (54.96%) in 2m 1s
drive-scsi0: transferred 17.8 GiB of 32.0 GiB (55.66%) in 2m 2s
drive-scsi0: transferred 18.0 GiB of 32.0 GiB (56.37%) in 2m 3s
drive-scsi0: transferred 18.3 GiB of 32.0 GiB (57.08%) in 2m 4s
drive-scsi0: transferred 18.5 GiB of 32.0 GiB (57.78%) in 2m 5s
drive-scsi0: transferred 18.7 GiB of 32.0 GiB (58.48%) in 2m 6s
drive-scsi0: transferred 18.9 GiB of 32.0 GiB (59.16%) in 2m 7s
drive-scsi0: transferred 19.2 GiB of 32.0 GiB (59.86%) in 2m 8s
drive-scsi0: transferred 19.4 GiB of 32.0 GiB (60.57%) in 2m 9s
drive-scsi0: transferred 19.6 GiB of 32.0 GiB (61.27%) in 2m 10s
drive-scsi0: transferred 19.8 GiB of 32.0 GiB (61.98%) in 2m 11s
drive-scsi0: transferred 20.1 GiB of 32.0 GiB (62.69%) in 2m 12s
drive-scsi0: transferred 20.3 GiB of 32.0 GiB (63.41%) in 2m 13s
drive-scsi0: transferred 20.5 GiB of 32.0 GiB (64.14%) in 2m 14s
drive-scsi0: transferred 20.8 GiB of 32.0 GiB (64.84%) in 2m 15s
drive-scsi0: transferred 21.0 GiB of 32.0 GiB (65.54%) in 2m 16s
drive-scsi0: transferred 21.2 GiB of 32.0 GiB (66.24%) in 2m 17s
drive-scsi0: transferred 21.4 GiB of 32.0 GiB (66.95%) in 2m 18s
drive-scsi0: transferred 21.7 GiB of 32.0 GiB (67.64%) in 2m 19s
drive-scsi0: transferred 21.9 GiB of 32.0 GiB (68.35%) in 2m 20s
drive-scsi0: transferred 22.1 GiB of 32.0 GiB (69.05%) in 2m 21s
drive-scsi0: transferred 22.3 GiB of 32.0 GiB (69.76%) in 2m 22s
drive-scsi0: transferred 22.6 GiB of 32.0 GiB (70.47%) in 2m 23s
drive-scsi0: transferred 22.8 GiB of 32.0 GiB (71.19%) in 2m 24s
drive-scsi0: transferred 23.0 GiB of 32.0 GiB (71.92%) in 2m 25s
drive-scsi0: transferred 23.2 GiB of 32.0 GiB (72.60%) in 2m 26s
drive-scsi0: transferred 23.5 GiB of 32.0 GiB (73.33%) in 2m 27s
drive-scsi0: transferred 23.7 GiB of 32.0 GiB (74.03%) in 2m 28s
drive-scsi0: transferred 23.9 GiB of 32.0 GiB (74.74%) in 2m 29s
drive-scsi0: transferred 24.2 GiB of 32.0 GiB (75.45%) in 2m 30s
drive-scsi0: transferred 24.4 GiB of 32.0 GiB (76.18%) in 2m 31s
drive-scsi0: transferred 24.6 GiB of 32.0 GiB (76.90%) in 2m 32s
drive-scsi0: transferred 24.9 GiB of 32.0 GiB (77.63%) in 2m 33s
drive-scsi0: transferred 25.1 GiB of 32.0 GiB (78.34%) in 2m 34s
drive-scsi0: transferred 25.3 GiB of 32.0 GiB (79.04%) in 2m 35s
drive-scsi0: transferred 25.5 GiB of 32.0 GiB (79.72%) in 2m 36s
drive-scsi0: transferred 25.8 GiB of 32.0 GiB (80.42%) in 2m 37s
drive-scsi0: transferred 26.0 GiB of 32.0 GiB (81.13%) in 2m 38s
drive-scsi0: transferred 26.2 GiB of 32.0 GiB (81.83%) in 2m 39s
drive-scsi0: transferred 26.4 GiB of 32.0 GiB (82.52%) in 2m 40s
drive-scsi0: transferred 26.6 GiB of 32.0 GiB (83.22%) in 2m 41s
drive-scsi0: transferred 26.9 GiB of 32.0 GiB (83.92%) in 2m 42s
drive-scsi0: transferred 27.1 GiB of 32.0 GiB (84.63%) in 2m 43s
drive-scsi0: transferred 27.3 GiB of 32.0 GiB (85.34%) in 2m 44s
drive-scsi0: transferred 27.6 GiB of 32.0 GiB (86.04%) in 2m 45s
drive-scsi0: transferred 27.8 GiB of 32.0 GiB (86.72%) in 2m 46s
drive-scsi0: transferred 28.0 GiB of 32.0 GiB (87.45%) in 2m 47s
drive-scsi0: transferred 28.2 GiB of 32.0 GiB (88.16%) in 2m 48s
drive-scsi0: transferred 28.4 GiB of 32.0 GiB (88.83%) in 2m 49s
drive-scsi0: transferred 28.7 GiB of 32.0 GiB (89.54%) in 2m 50s
drive-scsi0: transferred 28.9 GiB of 32.0 GiB (90.26%) in 2m 51s
drive-scsi0: transferred 29.1 GiB of 32.0 GiB (90.98%) in 2m 52s
drive-scsi0: transferred 29.4 GiB of 32.0 GiB (91.70%) in 2m 53s
drive-scsi0: transferred 29.6 GiB of 32.0 GiB (92.39%) in 2m 54s
drive-scsi0: transferred 29.8 GiB of 32.0 GiB (93.06%) in 2m 55s
drive-scsi0: transferred 30.0 GiB of 32.0 GiB (93.74%) in 2m 56s
drive-scsi0: transferred 30.2 GiB of 32.0 GiB (94.43%) in 2m 57s
drive-scsi0: transferred 30.5 GiB of 32.0 GiB (95.16%) in 2m 58s
drive-scsi0: transferred 30.7 GiB of 32.0 GiB (95.85%) in 2m 59s
drive-scsi0: transferred 30.9 GiB of 32.0 GiB (96.56%) in 3m
drive-scsi0: transferred 31.2 GiB of 32.0 GiB (97.28%) in 3m 1s
drive-scsi0: transferred 31.4 GiB of 32.0 GiB (97.98%) in 3m 2s
drive-scsi0: transferred 31.6 GiB of 32.0 GiB (98.71%) in 3m 3s
drive-scsi0: transferred 31.8 GiB of 32.0 GiB (99.42%) in 3m 4s
drive-scsi0: transferred 32.0 GiB of 32.0 GiB (100.00%) in 3m 5s
drive-scsi0: transferred 32.0 GiB of 32.0 GiB (100.00%) in 3m 6s, ready
all 'mirror' jobs are ready
2024-04-15 20:56:54 efidisk0: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-efidisk0
drive mirror is starting for drive-efidisk0
drive-efidisk0: transferred 0.0 B of 528.0 KiB (0.00%) in 0s
drive-efidisk0: transferred 528.0 KiB of 528.0 KiB (100.00%) in 1s, ready
all 'mirror' jobs are ready
2024-04-15 20:56:55 starting online/live migration on unix:/run/qemu-server/101.migrate
2024-04-15 20:56:55 set migration capabilities
2024-04-15 20:56:55 migration downtime limit: 100 ms
2024-04-15 20:56:55 migration cachesize: 1.0 GiB
2024-04-15 20:56:55 set migration parameters
2024-04-15 20:56:55 start migrate command to unix:/run/qemu-server/101.migrate
2024-04-15 20:56:56 migration active, transferred 153.0 MiB of 8.0 GiB VM-state, 257.3 MiB/s
2024-04-15 20:56:57 migration active, transferred 390.0 MiB of 8.0 GiB VM-state, 233.2 MiB/s
2024-04-15 20:56:58 migration active, transferred 625.6 MiB of 8.0 GiB VM-state, 233.6 MiB/s
2024-04-15 20:56:59 migration active, transferred 826.1 MiB of 8.0 GiB VM-state, 114.8 MiB/s
2024-04-15 20:57:00 migration active, transferred 1.0 GiB of 8.0 GiB VM-state, 228.6 MiB/s
2024-04-15 20:57:01 migration active, transferred 1.2 GiB of 8.0 GiB VM-state, 233.4 MiB/s
2024-04-15 20:57:02 migration active, transferred 1.5 GiB of 8.0 GiB VM-state, 226.0 MiB/s
2024-04-15 20:57:03 migration active, transferred 1.7 GiB of 8.0 GiB VM-state, 245.4 MiB/s
2024-04-15 20:57:04 migration active, transferred 1.9 GiB of 8.0 GiB VM-state, 242.9 MiB/s
2024-04-15 20:57:05 migration active, transferred 2.1 GiB of 8.0 GiB VM-state, 3.8 GiB/s
2024-04-15 20:57:07 average migration speed: 684.4 MiB/s - downtime 260 ms
2024-04-15 20:57:07 migration status: completed
all 'mirror' jobs are ready
drive-efidisk0: Completing block job_id...
drive-efidisk0: Completed successfully.
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-efidisk0: mirror-job finished
drive-scsi0: mirror-job finished
2024-04-15 20:57:08 stopping NBD storage migration server on target.
  Logical volume "vm-101-disk-0" successfully removed.
  Logical volume "vm-101-disk-1" successfully removed.
2024-04-15 20:57:16 migration finished successfully (duration 00:03:33)
TASK OK

However, now something else is happening: while the VM is up, it becomes unresponsive and pegs the CPU at 100%.
 
It's so hung that I had to delete the lock file before I could stop it.

Bash:
root@pve2:~# qm stop 101
trying to acquire lock...
can't lock file '/var/lock/qemu-server/lock-101.conf' - got timeout
root@pve2:~# rm -rf /var/lock/qemu-server/lock-101.conf
root@pve2:~# qm stop 101
root@pve2:~#
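
(As an aside, a possibly cleaner alternative, untested here, would be qm unlock, which releases a stale config lock without touching the lock file directly:)

Bash:
# Release a stale config lock, then retry the stop
qm unlock 101
qm stop 101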
 
I was able to reproduce this with a brand-new VM from https://tteck.github.io/Proxmox/ > Home Assistant OS VM.

I installed it on one node, let it start, and migrated it. The migration itself went fine, but the VM is unresponsive afterwards.

Not sure if it's only happening to me or if there's an actual bug!
 
Please share the VM configuration and the part of the system log from around the time of the problematic migration, i.e. the first one, where the VM crashed. Do you have different physical CPUs on the two nodes? See the CPU Type section of https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu
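
For instance, one quick way to compare the physical CPUs of the two nodes (just a sketch, any equivalent command works):

Code:
# Run on each node and compare the reported model names
grep -m1 'model name' /proc/cpuinfo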
 
Thanks @fiona!

I'm sorry if I'm not understanding everything correctly. I'm not totally clear on how to get the system log: is it with dmesg, or should I run another command?

The VM configuration is as follows (it's the default configuration from the @tteckster script); see the attached screenshot.

As for the physical CPUs, yes, they differ:
- `pve`: 8 x Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz (1 Socket)
- `pve2`: 4 x Intel(R) N100 (1 Socket)

(screenshot of the VM configuration attached)
 
Attaching the dmesg -T output from around the time of the migration. I hope this helps!

I'm migrating VM 104 (haos 12.2) from host pve to pve2.

pve log
Code:
[Tue Apr 16 14:11:24 2024] vmbr0: port 5(tap104i0) entered blocking state
[Tue Apr 16 14:11:24 2024] vmbr0: port 5(tap104i0) entered forwarding state
[Tue Apr 16 14:17:22 2024] tap104i0: left allmulticast mode
[Tue Apr 16 14:17:22 2024] vmbr0: port 5(tap104i0) entered disabled state
[Tue Apr 16 14:17:33 2024] tap104i0: entered promiscuous mode
[Tue Apr 16 14:17:33 2024] vmbr0: port 5(tap104i0) entered blocking state
[Tue Apr 16 14:17:33 2024] vmbr0: port 5(tap104i0) entered disabled state
[Tue Apr 16 14:17:33 2024] tap104i0: entered allmulticast mode
[Tue Apr 16 14:17:33 2024] vmbr0: port 5(tap104i0) entered blocking state
[Tue Apr 16 14:17:33 2024] vmbr0: port 5(tap104i0) entered forwarding state
[Tue Apr 16 14:17:38 2024] kvm_msr_ignored_check: 10 callbacks suppressed
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x492 data 0x0
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x1c9 data 0x0
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored wrmsr: 0x1c9 data 0x3
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x1c9 data 0x0
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x1a6 data 0x0
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored wrmsr: 0x1a6 data 0x11
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x1a6 data 0x0
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x1a7 data 0x0
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored wrmsr: 0x1a7 data 0x11
[Tue Apr 16 14:17:38 2024] kvm: kvm [316260]: ignored rdmsr: 0x1a7 data 0x0
[Tue Apr 16 14:22:00 2024] tap104i0: left allmulticast mode
[Tue Apr 16 14:22:00 2024] vmbr0: port 5(tap104i0) entered disabled state
root@pve:~#

pve2 log
Code:
[Tue Apr 16 14:14:25 2024] tap104i0: left allmulticast mode
[Tue Apr 16 14:14:25 2024] vmbr0: port 4(tap104i0) entered disabled state
[Tue Apr 16 14:18:30 2024] tap104i0: entered promiscuous mode
[Tue Apr 16 14:18:30 2024] vmbr0: port 4(tap104i0) entered blocking state
[Tue Apr 16 14:18:30 2024] vmbr0: port 4(tap104i0) entered disabled state
[Tue Apr 16 14:18:30 2024] tap104i0: entered allmulticast mode
[Tue Apr 16 14:18:30 2024] vmbr0: port 4(tap104i0) entered blocking state
[Tue Apr 16 14:18:30 2024] vmbr0: port 4(tap104i0) entered forwarding state
[Tue Apr 16 14:21:53 2024] x86/split lock detection: #AC: CPU 1/KVM/207382 took a split_lock trap at address: 0xbfebd050

The VM is at 100% CPU; I can only stop it by deleting the lock file (shutdown doesn't respond).
 
I see you have the processor Type set to host. Try changing that to x86-64-v2-AES
(in the GUI: VM > Hardware > Processors > Edit > Type dropdown > x86-64-v2-AES).
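
The CLI equivalent should be something like the following (the VM ID is assumed; the new type only takes effect after the VM is restarted):

Code:
# Use a generic vCPU model instead of passing through the host CPU
qm set 101 --cpu x86-64-v2-AES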
 
This actually seems to have solved the problem. (Probably the problem was me, haha.)
 
