failed - no tunnel IP received

scouser_

Hello guys!

I get a strange error when I try to migrate my VM to another node:
Code:
2020-12-04 10:32:04 found local disk 'ssd1-crypt:203/vm-203-disk-0.raw' (in current VM config)
2020-12-04 10:32:04 found local disk 'ssd1-crypt:203/vm-203-disk-1.raw' (in current VM config)
2020-12-04 10:32:04 found local disk 'ssd1-crypt:203/vm-203-disk-2.raw' (in current VM config)
2020-12-04 10:32:05 copying local disk images
2020-12-04 10:32:05 using a bandwidth limit of 73777152 bps for transferring 'ssd1-crypt:203/vm-203-disk-0.raw'
send/receive failed, cleaning up snapshot(s)..
2020-12-04 10:32:05 ERROR: Failed to sync data - storage migration for 'ssd1-crypt:203/vm-203-disk-0.raw' to storage 'ssd1-crypt' failed - no tunnel IP received
2020-12-04 10:32:05 aborting phase 1 - cleanup resources
2020-12-04 10:32:05 ERROR: found stale volume copy 'ssd1-crypt:203/vm-203-disk-0.raw' on node 'ams-hv10'
2020-12-04 10:32:05 ERROR: migration aborted (duration 00:00:01): Failed to sync data - storage migration for 'ssd1-crypt:203/vm-203-disk-0.raw' to storage 'ssd1-crypt' failed - no tunnel IP received

I've never had a migration fail before. What could be the reason?
 
Hi,
does the problem persist? Could you post the output of pveversion -v from both the source and the target node? Do you have a migration entry in your /etc/pve/datacenter.cfg, and does it match the current network configuration?
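For reference, such an entry usually looks like the following (the CIDR below is purely an illustrative value; it should be a network in which every node has exactly one address):
Code:
migration: type=insecure,network=10.10.10.0/24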
 
I'm having the same issue. Migrations were working perfectly last year. I haven't tried one in a while (Christmas break, etc.), but in the last two weeks I've tried three migrations and all of them failed.
Bash:
2021-02-11 08:22:19 use dedicated network address for sending migration traffic (172.16.10.23)
2021-02-11 08:22:19 starting migration of VM 199 to node 'STEVE' (172.16.10.23)
2021-02-11 08:22:19 found local, replicated disk 'pool1:vm-199-disk-0' (in current VM config)
2021-02-11 08:22:19 found local, replicated disk 'pool1:vm-199-disk-1' (in current VM config)
2021-02-11 08:22:19 replicating disk images
2021-02-11 08:22:19 start replication job
2021-02-11 08:22:19 guest => VM 199, running => 0
2021-02-11 08:22:19 volumes => pool1:vm-199-disk-0,pool1:vm-199-disk-1
2021-02-11 08:22:22 create snapshot '__replicate_199-0_1613002939__' on pool1:vm-199-disk-0
2021-02-11 08:22:22 create snapshot '__replicate_199-0_1613002939__' on pool1:vm-199-disk-1
2021-02-11 08:22:22 using insecure transmission, rate limit: none
2021-02-11 08:22:22 incremental sync 'pool1:vm-199-disk-0' (__replicate_199-0_1610121793__ => __replicate_199-0_1613002939__)
send/receive failed, cleaning up snapshot(s)..
2021-02-11 08:37:25 delete previous replication snapshot '__replicate_199-0_1613002939__' on pool1:vm-199-disk-0
2021-02-11 08:37:25 delete previous replication snapshot '__replicate_199-0_1613002939__' on pool1:vm-199-disk-1
2021-02-11 08:37:25 end replication job with error: no tunnel IP received
2021-02-11 08:37:25 ERROR: Failed to sync data - no tunnel IP received
2021-02-11 08:37:25 aborting phase 1 - cleanup resources
2021-02-11 08:37:25 ERROR: migration aborted (duration 00:15:07): Failed to sync data - no tunnel IP received
TASK ERROR: migration aborted

Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-2
pve-kernel-helper: 6.3-2
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-1
libpve-common-perl: 6.3-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

I tried two more today and both failed, despite the log saying replication was OK.
I'm going to try rebooting my cluster tonight to see if that improves anything.
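(Side note in case it helps anyone debugging this: the replication jobs can be exercised independently of migration. pvesr is the standard PVE replication CLI; the job ID 199-0 below is simply taken from the log above, and the exact flags may vary slightly between versions.)
Code:
pvesr status                      # list replication jobs and the result of their last sync
pvesr run --id 199-0 --verbose    # re-run a single job and show the full send/receive output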
 
Hi,
what are your migration settings in the /etc/pve/datacenter.cfg?

If your network value is in CIDR notation, would you mind running the following Perl snippet on the target node?
Code:
use strict;
use warnings;

use PVE::Network;
use Data::Dumper;

# the migration network (in CIDR notation) is taken from the command line
my ($cidr) = @ARGV;
print Dumper($cidr);

# list all local addresses that fall inside that CIDR
my $ips = PVE::Network::get_local_ip_from_cidr($cidr);
print Dumper($ips);

Just create a file networkinfo.pl with these contents and use
Code:
perl networkinfo.pl <CIDR>
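For context, on a node where the migration network is set up correctly the output should look roughly like this (the addresses below are made up for illustration). An empty list in the second dump would mean that no local interface has an address inside the given CIDR, which would fit the 'no tunnel IP received' error:
Code:
$VAR1 = '172.16.10.0/24';
$VAR1 = [
          '172.16.10.23'
        ];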
 
The error is raised from /usr/share/perl5/PVE/Storage.pm when migration: type=insecure is set.

As a workaround, change this in /etc/pve/datacenter.cfg:
Code:
migration: type=secure
 
Sorry for not replying, Fabian; I somehow didn't get a notification. If I recall correctly, I ended up resolving this by rebooting both nodes, and then it was fine.
 
Glad to hear that the issue got resolved. We've made a change to show more information, should the issue appear again in the future.
 
We had the same issue, and it was caused by hardening: we had added an /etc/issue.net file, and that file seems to prevent migration from working, producing the same error message.

Removing the file made it work again, for those who have the same issue.
 
Yes, setting an ssh banner will confuse migration with type=insecure (and maybe other things), because it doesn't expect the additional output.
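For anyone who needs to keep the banner file for compliance reasons, an alternative to deleting /etc/issue.net may be to stop sshd from sending it. This assumes the hardening step enabled the banner via the Banner directive in sshd_config (not confirmed in the posts above):
Code:
# /etc/ssh/sshd_config on every node:
# comment out the hardening line, or set the banner to none
#Banner /etc/issue.net
Banner none

# then reload the SSH daemon (Debian/PVE service name)
systemctl reload ssh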
 
