Hi, I've searched extensively on Google and in the forums and seen a couple of similar posts, but nothing that specifically matches my issue.
I've already tried running pvecm updatecerts -f and rebooting the nodes. pvecm updatecerts fixes the SSH shell between hosts for a while, but it doesn't fix the error when migrating a VM, and after a while even the SSH shell stops working again.
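For reference, this is roughly what I've been running on each node (exact invocation from memory):

pvecm updatecerts --force    # same as -f; run on node1, node2 and node3
reboot                       # rebooted the nodes afterwards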
My issue details:
I have a cluster with 3 nodes. Node1 has all the VMs on it. I was able to offline-migrate a VM before adding the 3rd node; now I can't anymore.
From node1 (before pvecm updatecerts)
Action: "ssh node2" < ssh possible dns spoofing detected
Action: "ssh node3" < ssh possible dns spoofing detected
From node1 (after pvecm updatecerts)
Action: "ssh node2" < OK
Action: "ssh node3" < OK
Action: web gui, select node2, browse to system to view hosts file or syslog etc < OK
Action: web gui, select node3, browse to system to view hosts file or syslog etc < connection error 596
From any node web gui
Action: initiate online OR offline migration from node1 to any node < ssh possible dns spoofing detected
When I try to view items under the System menu of node3 in the web GUI (it doesn't matter whether I do this from the web GUI on node1 or node2), I get this error: "Connection error 596: tls_process_server_certificate: certificate verify failed". At the same time, the Shell menu for node3 works fine, and "ssh node3" gives the standard SSH host key mismatch error: "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"
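If it helps with diagnosis, I can post the output of something like the following, to compare the certificate node3's pveproxy is actually serving against the node certificate stored under /etc/pve (hostname and paths are from my setup; I believe the web GUI listens on 8006):

# fingerprint of the cert node3 presents on the web GUI port
echo | openssl s_client -connect node3:8006 2>/dev/null | openssl x509 -noout -fingerprint -sha256
# fingerprint of the node certificate stored in the cluster filesystem
openssl x509 -in /etc/pve/nodes/node3/pve-ssl.pem -noout -fingerprint -sha256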
At this point, if I run pvecm updatecerts (with or without --force, it makes no difference), it fixes "ssh nodeX" for a while, but that seems to revert after some time. I'm not sure how long; maybe 30 minutes, maybe an hour or a couple of hours.
I've tried deleting /etc/ssh/ssh_known_hosts and letting pvecm updatecerts regenerate it, and I've tried pvecm updatecerts -f on all 3 nodes and then rebooting node2 and node3.
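For completeness, this is roughly the sequence I used for that (from memory):

rm /etc/ssh/ssh_known_hosts
pvecm updatecerts --force    # run on all 3 nodes
# then rebooted node2 and node3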
I've compared the public key from the affected target node against what's in the ssh_known_hosts file on the source host, and both are the same.
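Roughly how I did the comparison (exact commands from memory; node3 is the affected target, node1 is the source, and the key blobs matched):

# host key node3 is actually presenting over the wire
ssh-keyscan -t rsa,ed25519 node3 2>/dev/null
# entry recorded for node3 in the known_hosts on node1
ssh-keygen -F node3 -f /etc/ssh/ssh_known_hosts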
Need an expert and experienced hand here - many thanks in advance.