Host Node ID error after upgrade from 7.2 to 7.3

HiTekAgPilot

New Member
Feb 24, 2023
Last night I updated my lab from Proxmox VE 7.2 to 7.3 using the pve-no-subscription repository. The lab consists of four (4) nodes (pve01, pve02, pve03, pve04). After the update I started getting the following error when attempting to access pve02 through the Proxmox webui at 'proxmox.networkname.net' (which points to pve01):

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:UfFtQHGwtY8NO7xPgD9rolK2dFewRHJ2KTEYopiBVy4.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending RSA key in /etc/ssh/ssh_known_hosts:3
remove with:
ssh-keygen -f "/etc/ssh/ssh_known_hosts" -R "pve02"
RSA host key for pve02 has changed and you have requested strict checking.
Host key verification failed.
TASK ERROR: Failed to run vncproxy.

I get the same error whether I log into the webui on pve01 (as above), pve03, or pve04.

I have searched for the proper way to fix this, with limited success.

I ran 'pvecm updatecerts' on all four nodes. After that I was able to open the pve02 server shell, as well as the shell of the single container I have on pve02. However, I continue to get the same message (above) when I attempt to access a console (vncproxy): the console window shows the red band "Failed to connect to server" at the top, and the message above appears in the Cluster Log.
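
For reference, this is how I ran it on each node's shell (a minimal sketch; the --force flag is optional and just regenerates the certificates even if they still look valid):

# run on every node; regenerates node certificates and should also
# refresh the cluster-wide SSH known_hosts under /etc/pve/priv/known_hosts
pvecm updatecerts --force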

It would appear that the update changed something on pve02 (I wish I knew what). I can reproduce the above from all three (3) other nodes (pve01, pve03, pve04) when attempting to access VM consoles on pve02.
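
To rule out an actual man-in-the-middle, I compared the fingerprint in the error against what pve02 is really serving (a sketch, assuming the default RSA host key path):

# on pve02: fingerprint of the host key the node is currently serving
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
# on any other node: the (stale) entry recorded for pve02
ssh-keygen -F pve02 -f /etc/ssh/ssh_known_hosts

If the first fingerprint doesn't match the recorded one, the known_hosts entry is simply out of date.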

I am also no longer able to migrate templates, VMs, and containers between pve02 and the rest of the cluster.

Thank you in advance for your help.

MD
 
I found this forum post useful in resolving my issue:
https://forum.proxmox.com/threads/task-error-failed-to-run-vncproxy.49954/

I found that I needed to do the following on pve01, pve03, and pve04. I had tried this before posting the message above, but I must have missed something. On each of those nodes I ran the following commands from the server shell within the Proxmox webui:

**** REMOVE BAD SSH KEYS FROM NODES ****
ssh-keygen -f /etc/ssh/ssh_known_hosts -R [node name] (example, ssh-keygen -f /etc/ssh/ssh_known_hosts -R pve02)
ssh-keygen -f /etc/ssh/ssh_known_hosts -R [ip address] (example, ssh-keygen -f /etc/ssh/ssh_known_hosts -R 192.168.8.32)

**** ADD NEW SSH KEYS TO NODES ****
ssh -p 22 root@[node name] (example, ssh -p 22 root@pve02)
ssh -p 22 root@[ip address] (example, ssh -p 22 root@192.168.8.32)
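
For anyone following along, the whole sequence per node looks roughly like this (a sketch; pve02 and 192.168.8.32 are from my setup, substitute your own; note that on PVE, /etc/ssh/ssh_known_hosts is normally a symlink to the cluster-wide /etc/pve/priv/known_hosts):

NODE=pve02            # node whose host key changed (my setup)
IP=192.168.8.32       # its address (my setup)

# drop the stale entries
ssh-keygen -f /etc/ssh/ssh_known_hosts -R "$NODE"
ssh-keygen -f /etc/ssh/ssh_known_hosts -R "$IP"

# reconnect once by name and once by IP so the new key gets recorded;
# accept-new only auto-accepts keys for hosts with no existing entry
ssh -o StrictHostKeyChecking=accept-new root@"$NODE" exit
ssh -o StrictHostKeyChecking=accept-new root@"$IP" exit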

It seems basic and straightforward; somehow I didn't do it right the first time.
 
