Can't migrate VMs to node in cluster?

So I just noticed this today. I have a 3-node cluster. I tried to migrate a VM from Node1-->Node2 and received an error. Here is the error:

Code:
task started by HA resource agent
2025-10-24 14:51:24 conntrack state migration not supported or disabled, active connections might get dropped
2025-10-24 14:51:24 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' -o 'UserKnownHostsFile=/etc/pve/nodes/pve2/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.7.10 /bin/true
2025-10-24 14:51:24 root@192.168.7.10: Permission denied (publickey).
2025-10-24 14:51:24 ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted

Tried to migrate a VM from Node1-->Node3, which worked fine. Tried to migrate a VM from Node3-->Node2 and it failed with the same error.
Tried regenerating the certs on Node2 by backing up my old keys (/root/.ssh/) and running `pvecm updatecerts`. That didn't fix anything.
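
For reference, this is roughly what I did on Node2 (a sketch from memory, so the exact backup path may differ):

Code:
# back up the existing SSH files before touching anything
cp -a /root/.ssh /root/.ssh.bak-$(date +%F)
# regenerate the node certs and merge the cluster SSH keys/known hosts
pvecm updatecerts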

I can SSH with `ssh -o "HostKeyAlias=<Node#>" root@<Node#>` from:
Node1-->Node3
Node3-->Node1
Node2-->Node1
Node2-->Node3

and all of those work, but trying to SSH to Node2 from Node1 or Node3 fails, so
Node1-->Node2
Node3-->Node2
both fail with `Permission denied (publickey)`.
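
Concretely, the failing direction looks like this from Node1 (Node2's IP and alias are the same as in the migration log above):

Code:
# run from Node1; pve2 / 192.168.7.10 is Node2
ssh -o "HostKeyAlias=pve2" root@192.168.7.10
# -> root@192.168.7.10: Permission denied (publickey).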

I checked `authorized_keys` on Node2 (/root/.ssh/) to make sure the public keys from Node1 and Node3 are in the file, and they are.
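
This is how I checked; if I understand the Proxmox setup right, /root/.ssh/authorized_keys on a cluster node is a symlink into the shared /etc/pve/priv/ tree, so I looked at both:

Code:
# confirm the file is the expected symlink into the cluster filesystem
ls -l /root/.ssh/authorized_keys
# show which nodes' keys are present (the trailing comment is root@<node>)
grep -o 'root@.*' /etc/pve/priv/authorized_keys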

Not sure what I need to do from here. If a verbose log of an SSH attempt to Node2 would help, let me know and I'll post it. I've had this cluster for a while, and yes, it used to work.
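
This is the command I'd run from Node1 to capture it (the same invocation the migration task uses, per the log above, plus -vvv):

Code:
# same options as the migration task, with full client-side debugging
ssh -vvv -e none -o BatchMode=yes \
    -o HostKeyAlias=pve2 \
    -o UserKnownHostsFile=/etc/pve/nodes/pve2/ssh_known_hosts \
    -o GlobalKnownHostsFile=none \
    root@192.168.7.10 /bin/true 2> ssh-node2-debug.log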
 