LXC migration tries to use old management IP address

Feb 16, 2024
We have a 4-node cluster with the PVE cluster network on 2 bonded 25 Gbit NICs in VLAN 69, using non-routable IP addresses 169.254.69.1-4 (no gateway IP) in corosync.conf. Ceph runs on this same bond in VLAN 70, using non-routable IP addresses 169.254.70.1-4 in ceph.conf.
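
For reference, the corresponding split in ceph.conf looks roughly like this (paraphrased from memory, not a verbatim copy of our file; whether public_network also sits on this subnet is an assumption):
Code:
[global]
    # both Ceph networks on the bonded VLAN 70 interfaces (assumed)
    cluster_network = 169.254.70.0/24
    public_network = 169.254.70.0/24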

Our PVE management IP range changed from 172.16.4.210-213 to 172.23.0.10-13, which I'm afraid is now causing issues.
The hosts files are updated with the new local management IP addresses, and we're able to manage the cluster just fine and also migrate VMs.
All PVE hosts were rebooted after changing the management IP addresses, and we can access the PVE management UI from each node.

But I've noticed that attempts to migrate LXCs fail with a connection timeout, because communication is still attempted on the previous management IP addresses.

LXC HA Migration attempt error:
Code:
2024-08-31 06:22:05 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-node-a' -o 'UserKnownHostsFile=/etc/pve/nodes/pve-node-a/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@172.16.4.210 /bin/true
2024-08-31 06:22:05 ssh: connect to host 172.16.4.210 port 22: Connection timed out
2024-08-31 06:22:05 ERROR: migration aborted (duration 00:02:12): Can't connect to destination address using public key

Our new management IP addresses are 172.23.0.10-13 and no longer in the 172.16.4.0/24 subnet.
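
For reference, the same connectivity probe pointed at one of the new addresses would be (host-key options from the log omitted for brevity):
Code:
/usr/bin/ssh -e none -o BatchMode=yes root@172.23.0.10 /bin/true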

I suspect we might need to update hard-coded IP addresses in each host's ssh_known_hosts. I found similar mentions on the forum, but am unable to determine whether this is the right path, or how to actually go about the change.
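
If stale known-hosts entries are the culprit, my understanding from the docs is that the following would re-sync the cluster's SSH known hosts and node certificates (noting this as a guess, I haven't run it here yet):
Code:
pvecm updatecerts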
 
Hi,
do you have a migration network defined in /etc/pve/datacenter.cfg? Are the IPs mentioned in /etc/hosts correct on all nodes?
 
Hi Fiona,

The /etc/hosts file on each host references itself with the correct management IP address, e.g.
Code:
172.23.0.10 pve-node-a.<FQDN> pve-node-a
pve-node-b references itself in the same way, and so forth:
Code:
172.23.0.11 pve-node-b.<FQDN> pve-node-b

The Datacenter migration setting is at its default, which should make it use the cluster network. The /etc/pve/datacenter.cfg file only contains the keyboard layout.
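
(If we ever wanted to pin migration traffic to the cluster network explicitly rather than rely on the default, my understanding is it would be a single migration line in /etc/pve/datacenter.cfg, e.g. with our cluster subnet:)
Code:
migration: type=secure,network=169.254.69.0/24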

But then... looking at this would explain why an old management IP remains:
Code:
root@pve-node-a:/etc/pve# cat /etc/pve/.members
{
"nodename": "pve-node-a",
"version": 19,
"cluster": { "name": "pvec", "version": 4, "nodes": 4, "quorate": 1 },
"nodelist": {
  "pve-node-a": { "id": 1, "online": 1, "ip": "172.16.4.210"},
  "pve-node-b": { "id": 2, "online": 1, "ip": "172.23.0.11"},
  "pve-node-c": { "id": 4, "online": 1, "ip": "172.23.0.12"},
  "pve-node-d": { "id": 3, "online": 1, "ip": "172.23.0.13"}
  }
}

On the forums I found a couple of other threads about changing a wrong IP address in /etc/pve/.members, but no clear confirmation that simply correcting the IP in that file is the right way to mitigate.

Our corosync.conf has the cluster network on its own subnet and set of bonded interfaces, so I'm confident the cluster itself would stay intact; a quick membership check is shown after the config below.
Code:
nodelist {
  node {
    name: pve-node-a
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 169.254.69.1
  }
  node {
    name: pve-node-b
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 169.254.69.2
  }
  node {
    name: pve-node-c
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 169.254.69.3
  }
  node {
    name: pve-node-d
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 169.254.69.4
  }
}
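
For completeness, membership and quorum on the ring addresses can be double-checked with:
Code:
pvecm status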
 
Looks like you forgot to reboot pve-node-a and it did not update /etc/pve/.members with its new IP. If pve-node-a has the right IP in /etc/hosts, doing a systemctl restart pve-cluster.service will update /etc/pve/.members cluster-wide with the right IP, and migration should work.
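
That is, on pve-node-a:
Code:
# after verifying /etc/hosts on pve-node-a lists the new 172.23.0.10 address:
systemctl restart pve-cluster.service
# the regenerated file should then show the new IP:
cat /etc/pve/.members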
 
