Live migration fails channel 2: open failed: connect failed: open failed

Jethro (New Member), Mar 23, 2023
Hi All,

I'm not sure since when, but live migration currently isn't working anymore, failing with an unhelpful error: `channel 2: open failed: connect failed: open failed`

When I shut down a VM first, the migration works, but I want to do live migrations without downtime, as I did before.

Can someone point me in the right direction? I can't find what's causing this issue.

Thanks!


Code:
2023-03-23 18:47:53 starting migration of VM 150 to node 'proxmox1' (10.1.54.31)
2023-03-23 18:47:54 starting VM 150 on remote node 'proxmox1'
2023-03-23 18:47:58 start remote tunnel
2023-03-23 18:47:59 ssh tunnel ver 1
2023-03-23 18:47:59 starting online/live migration on unix:/run/qemu-server/150.migrate
2023-03-23 18:47:59 set migration capabilities
2023-03-23 18:47:59 migration downtime limit: 100 ms
2023-03-23 18:47:59 migration cachesize: 512.0 MiB
2023-03-23 18:47:59 set migration parameters
2023-03-23 18:47:59 start migrate command to unix:/run/qemu-server/150.migrate
channel 2: open failed: connect failed: open failed

2023-03-23 18:48:00 migration status error: failed - Unable to write to socket: Broken pipe
2023-03-23 18:48:00 ERROR: online migrate failure - aborting
2023-03-23 18:48:00 aborting phase 2 - cleanup resources
2023-03-23 18:48:00 migrate_cancel
2023-03-23 18:48:03 ERROR: migration finished with problems (duration 00:00:10)
TASK ERROR: migration problems

Code:
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-12
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 16.2.11-pve1
ceph-fuse: 16.2.11-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 
proxmox-ve: 7.4-1 (running kernel: 5.13.19-4-pve)

Do you run that kernel on purpose? If yes, what is the reason?
Or did you simply not reboot the PVE host(s) since the last update? (It looks like you updated from PVE 7.1 to 7.4.)

I don't know if this will fix your problem, but I would recommend rebooting all the PVE hosts one after the other after the update, not only but mainly so that they boot with the new kernel.
 
I didn't stick to my own rule of first migrating a node empty and only then running `apt dist-upgrade`. So now that I want to reboot, I can't migrate the node empty, because live migration isn't working anymore.

I have just rebooted all nodes one by one, with some downtime on the VMs, but now that everything is back online, live migration is still giving me the same error. So rebooting didn't resolve the issue.

On the source side I see this in the syslog: `QEMU[6162]: kvm: Unable to write to socket: Broken pipe`. Hopefully this will point someone in the right direction.
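For anyone debugging similar failures: per the "start remote tunnel" step in the task log, the live migration stream goes through an SSH tunnel that forwards the target node's migration UNIX socket. A rough way to test that forwarding path by hand (a sketch only; the socket path, VM ID, and target IP are taken from the migration log in this thread) is:

```shell
# Sketch: forward a local UNIX socket to the migration socket on the target
# node. This exercises the same SSH forwarding mechanism that the
# "start remote tunnel" step of the migration relies on.
# Socket path and target IP are the ones from the task log above.
ssh -o BatchMode=yes -N \
    -L /tmp/migrate-test.sock:/run/qemu-server/150.migrate \
    root@10.1.54.31
# If the target's sshd refuses forwarding, the channel open is rejected
# and you see "open failed: connect failed" messages like the one in the
# migration task log.
```

Note that the forward only works while the target VM's migration socket actually exists, i.e. during a migration attempt; outside of that window a "connect failed" here is expected.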

Code:
~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-12
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 16.2.11-pve1
ceph-fuse: 16.2.11-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 
I found the issue: earlier this year we rolled out an SSH hardening that set `AllowTcpForwarding no`. Now that I have changed this value back to `yes`, the migration works again.
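For reference, a quick way to verify what sshd is actually enforcing on each node (a sketch; `sshd -T` dumps the fully resolved server configuration and needs root):

```shell
# Print the effective value sshd is using, resolved from /etc/ssh/sshd_config
# and any included or Match'd fragments (needs root):
sshd -T | grep -i allowtcpforwarding

# Live migration needs forwarding enabled for the root connection:
#   AllowTcpForwarding yes
# After editing sshd_config, reload the daemon so the change takes effect:
systemctl reload ssh
```

Checking with `sshd -T` rather than grepping the config file directly catches cases where a hardening drop-in or a `Match` block overrides the main file.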

thanks all for helping!
 
