[SOLVED] pvecm updatecert -f - not working

I ran the commands on each node, then went to the GUI to migrate a VM and got a different error.
I then SSH'ed between each host to set up keys again, which worked as it should. Once done, I could SSH between hosts without a password.
I tried again from the GUI and got the same error (image below).

Now, I looked at the link and it's all dev talk; it's not clear what the solution is, or whether there even is one. The following is posted at the end, but there are no further comments, so there's no way to know whether this is actually a solution or just testing. The person says 'this worked for me', which to me never means it's the agreed-upon solution.
All of these hosts are remote to me. I have no way to disconnect them from the network as one person suggested.

As you said, it would be nice if this were DNS-based rather than hostname/IP-based.
I'm still not sure how to get out of this problem. I badly need to regain access to the cluster, as I have to migrate some newly built hosts and do other work.

Code:
Manual solution / the following procedure has worked for me:

script "generate_known_hosts.sh"
--------------------------------------------------------
#!/bin/bash

hosts=(
    'pve1'
    '10.99.2.1'
    'fd71::2:1'
    'pve2'
    '10.99.2.2'
    'fd71::2:2'
    'pve3'
    '10.99.2.3'
    'fd71::2:3'
)

for host in "${hosts[@]}"; do
    ssh-keyscan -t rsa "$host"
done
--------------------------------------------------------

# step 1
/etc/pve/priv/authorized_keys:
compare with each node's /root/.ssh/id_rsa.pub
and leave only the active keys; delete the old ones


# step 2
for each host there should be 3 entries (NetBIOS name, IPv4, IPv6) in:
/root/.ssh/known_hosts
/etc/pve/priv/known_hosts

replace missing entries
reboot each node just to make sure
--------------------------------------------------------
I think the problem is this behaviour:
1) new SSH keys are added without checking for duplicates.
2) on access, the first matching entry is used, and it is the wrong one because it is an old entry.
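
For reference, one blunt way to apply the above on a node (a sketch only; it regenerates the two known_hosts locations wholesale from the script's output, and the /etc/pve/priv one is shared cluster-wide, so that part only needs doing once):

Code:
# step 1 check: is this node's own public key present in the cluster's authorized_keys?
grep -F -c "$(awk '{print $2}' /root/.ssh/id_rsa.pub)" /etc/pve/priv/authorized_keys

# step 2: rebuild both known_hosts locations from the keyscan output
./generate_known_hosts.sh > /tmp/known_hosts.new
cp /tmp/known_hosts.new /root/.ssh/known_hosts
cp /tmp/known_hosts.new /etc/pve/priv/known_hosts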


(screenshot attached: 1699629303353.png, the migration error shown in the GUI)
 
You know what, given that it is all messed up for you when it comes to the known_hosts mismatches, there really isn't any additional risk in ditching the whole cluster's record of known_hosts (you can't connect across nodes from the GUI over SSH already anyway), and you are not ditching keys, just the integrity information.

I would do this on the node from which you cannot migrate, so on pro03:

Code:
# rm ~/.ssh/known_hosts
# rm /etc/ssh/ssh_known_hosts
# rm /etc/pve/priv/known_hosts
# pvecm updatecerts
# reboot

This only affects where you will be able to connect TO from pro03; it does not prevent you from SSHing into pro03, even from your workstation.

After it restarts:

Connect to the GUI of pro03, open a shell, manually SSH into pro07, accept the fingerprint, let it connect, then exit. Run pvecm updatecerts once again and then try your migration.
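
In commands, from pro03's shell, that is roughly (accept the fingerprint when prompted):

Code:
# ssh root@pro07
# exit
# pvecm updatecerts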
 
Failing that, I would instead do a dirty workaround just to get my migrations done:

Append these to your /etc/ssh/ssh_config:

Code:
CheckHostIP no
StrictHostKeyChecking no
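
If you want to keep that a bit tighter, you could scope it to just the cluster nodes instead of everything; a sketch, assuming the pro0x names and 10.0.0.x addresses used in this thread:

Code:
# relax host key checking only for the cluster nodes
Host pro0* 10.0.0.*
    CheckHostIP no
    StrictHostKeyChecking no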
 
Wait now, I'm trying to track all this :).

You suggested some tests, then suggested I have nothing to lose by trashing the whole cluster's known_hosts.

I actually cannot migrate anything across any of the hosts, not just one or two.

So, should I provide the info you first asked for in your previous comment, or should I go ahead with the suggestions from your last comment?
 
I hope I'm providing everything you wanted. I used crummy passwords during testing/rebuilding, but I have no idea whether those will show up in the following data.
Code:
root@pro01:~# cat /etc/pve/priv/known_hosts
pro07 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlhTIAgqBHqfrNxi8m5W4bgUsT5mDL161VaaYN5IqWb0EaQW2MvrDfUbgCNj873feOoepkUBSL4mrlB++9tVvSq7jipBf4+JyGcNa0q4Usu+w2uVhEAr1cAmePa9VsUY1CvXAnIO8B0b79AFG3kBU0nXvZltu39fKDhGDdRVbVxp1xZf/JI2nGf9r+aPyXXvELd8+grOGfEGPEu2S+Fqkl59rtKRFHDAX1YfRem2bQqPU6kirR0fqM2JPXlnOc18+Kb0pVQRWGe6Lfmt+lFb+1WX7HTJfMvD0lVzrkseyvbDUSoEF4M+jvOZ5NieU/Tc3rXMotP+9b3ORUpoBVqzyiUTgS30mLGUdFnXwg7eRCm7F7yE62PJUx6lVTdtAi1yEObz5gUt244F8I3sB0bH/WY1xeL1lbHauwJAx1flBsg5OBd3nflujAv6HHbphkcfmlQK7kNUJhEXYlacw934uq9naiBHCCnRtuiVMFXjgM83908tzFDim8RmMAXDYcjIM=
10.0.0.76 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlhTIAgqBHqfrNxi8m5W4bgUsT5mDL161VaaYN5IqWb0EaQW2MvrDfUbgCNj873feOoepkUBSL4mrlB++9tVvSq7jipBf4+JyGcNa0q4Usu+w2uVhEAr1cAmePa9VsUY1CvXAnIO8B0b79AFG3kBU0nXvZltu39fKDhGDdRVbVxp1xZf/JI2nGf9r+aPyXXvELd8+grOGfEGPEu2S+Fqkl59rtKRFHDAX1YfRem2bQqPU6kirR0fqM2JPXlnOc18+Kb0pVQRWGe6Lfmt+lFb+1WX7HTJfMvD0lVzrkseyvbDUSoEF4M+jvOZ5NieU/Tc3rXMotP+9b3ORUpoBVqzyiUTgS30mLGUdFnXwg7eRCm7F7yE62PJUx6lVTdtAi1yEObz5gUt244F8I3sB0bH/WY1xeL1lbHauwJAx1flBsg5OBd3nflujAv6HHbphkcfmlQK7kNUJhEXYlacw934uq9naiBHCCnRtuiVMFXjgM83908tzFDim8RmMAXDYcjIM=
pro04 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy80R7385pUqCDAwv/2gi8KF1KnONjPZGUlw+GkpCxLvrQ6krk/9WMiOf9EUafirRC67tHDlpH1ctXK1lQ0HD8gBzJWPzkGNKpofFeSFq8Wl9Vb2WLDpufG2vNcsH6tHaJAxLznbaK9IpfeLzHcF0wAcVKgg1SVontTezzuYeU3TMfA1ZvPBAtm+N4HTHrGhABVTXwgczTsVJqbh6hJUxpmzbDbAavewaec273fbz6+cL2ncKOQQ9bCmtr9ko2Ba2ZzAbxgJq6HQWYQUOtXUHdFg2HVWkrkshngiDoKY74QA8U34MAr9YIgiYplRCEBQ9zuS07OA+PU33+w68lsA1dbEKzJJIHCGkUYPNIIm1ILy/gIjsYTHffsdsyQNHc9txau6tHTpJPFFVOTVfDxIUGxUKK2Tbp+O05XhNTs9utef5IiMLG7nVtkYuIkq8GLrLrNRDAzGsfDT0m1zGzPQLIcX3vI7ONAJuhYMOHDKSJ6+mYJ992rLwJSupdERzXYi0=
10.0.0.73 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy80R7385pUqCDAwv/2gi8KF1KnONjPZGUlw+GkpCxLvrQ6krk/9WMiOf9EUafirRC67tHDlpH1ctXK1lQ0HD8gBzJWPzkGNKpofFeSFq8Wl9Vb2WLDpufG2vNcsH6tHaJAxLznbaK9IpfeLzHcF0wAcVKgg1SVontTezzuYeU3TMfA1ZvPBAtm+N4HTHrGhABVTXwgczTsVJqbh6hJUxpmzbDbAavewaec273fbz6+cL2ncKOQQ9bCmtr9ko2Ba2ZzAbxgJq6HQWYQUOtXUHdFg2HVWkrkshngiDoKY74QA8U34MAr9YIgiYplRCEBQ9zuS07OA+PU33+w68lsA1dbEKzJJIHCGkUYPNIIm1ILy/gIjsYTHffsdsyQNHc9txau6tHTpJPFFVOTVfDxIUGxUKK2Tbp+O05XhNTs9utef5IiMLG7nVtkYuIkq8GLrLrNRDAzGsfDT0m1zGzPQLIcX3vI7ONAJuhYMOHDKSJ6+mYJ992rLwJSupdERzXYi0=
pro03 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+JAU8fNg1VxBJmcJR5Jsiz/TnHpAtvlVA3+3kqsVsyHnXt9fje07d7IkEmY9eybbVrIoiMkhjF8QmExotl/izs/nTmsw7Nuaa1VHZiJNDAV+r5LB/4XxnWI3nRVUya+j4qUrHRn5IAMbOWwVgu/uP4pkDQbzhN6+RwQVpg/AkN3zyRZrjbQ0FtBh712T8vax0VtBsjtKd5Ckof/pQFEv3W846B3ZnVKmWOtieE4QdZ3p6AreVt30a5GPUWBvgTgk0U+89GAHwdpYKrXp6CpAtcK3TJdHH7VOb+LTwQuSOp0lelFn5O8eQzuj5/2pKsjSuTYXjAv6zLXFuUdadwukaZzkbLsenwztmBNiRn3lzPbdAjFQB48hL7Y2DasnfHr5WZCpoiyOZ+WZRwdE92nlhiUrhTP0paTP4fAlS3AJnZOF2r6KsqyKpE7tHyCdvzFDBTZ3L4riP1Ij6IVis28ninyNVy0Lw1gTEHhidwUtI5iiG95yt4LL7DBCKbkmzEX8=
10.0.0.72 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+JAU8fNg1VxBJmcJR5Jsiz/TnHpAtvlVA3+3kqsVsyHnXt9fje07d7IkEmY9eybbVrIoiMkhjF8QmExotl/izs/nTmsw7Nuaa1VHZiJNDAV+r5LB/4XxnWI3nRVUya+j4qUrHRn5IAMbOWwVgu/uP4pkDQbzhN6+RwQVpg/AkN3zyRZrjbQ0FtBh712T8vax0VtBsjtKd5Ckof/pQFEv3W846B3ZnVKmWOtieE4QdZ3p6AreVt30a5GPUWBvgTgk0U+89GAHwdpYKrXp6CpAtcK3TJdHH7VOb+LTwQuSOp0lelFn5O8eQzuj5/2pKsjSuTYXjAv6zLXFuUdadwukaZzkbLsenwztmBNiRn3lzPbdAjFQB48hL7Y2DasnfHr5WZCpoiyOZ+WZRwdE92nlhiUrhTP0paTP4fAlS3AJnZOF2r6KsqyKpE7tHyCdvzFDBTZ3L4riP1Ij6IVis28ninyNVy0Lw1gTEHhidwUtI5iiG95yt4LL7DBCKbkmzEX8=
pro02 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD8VrYOozkf/kS1T0+BHkwQcaMaFyuN7aznzP0Nyv5aGWktSQuw9uJ2ur406/2SPCnZ6G+rMcwKoUo1GoAvnbLcXL/gSYjM+uvXPqYfH9z+qMbujQJcqGVJVomdRRTQ8XFAI/+39CViWODa1BT67ZGuGdoaPO8aDyGIrH2AJf8MaeTNo+h44PPNhg1woWqx4tayWOU69kCweg7Q0G/MUssjDsjANrZqz5pAJxIFYnmLwq6QbY6Gvk2IksNXPOaXkGr7eGsuayDd8pdtsA72Z/jVoYABh96qkhk47A8PncxyuUcdJJlKfMxK8bh0lf+eH4a0ANUe3qLAGn1fX9JWKXAsALTwupvw1TS0mQQZ1zC06KMwZDZBE4V2fbW5+TqyUJ8tMOK9lN2P6/5CYcp9fh7m6Ebec1qonoR9Y0mov8ahE/VjoePufXWBrrgywtOy3oM0F/iXnm1Iv9wYgdcbooXBFIjqUf+jE2mrYuJyGnsqqaq6Sryb4GT/MPAakPzaG1E=
10.0.0.71 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD8VrYOozkf/kS1T0+BHkwQcaMaFyuN7aznzP0Nyv5aGWktSQuw9uJ2ur406/2SPCnZ6G+rMcwKoUo1GoAvnbLcXL/gSYjM+uvXPqYfH9z+qMbujQJcqGVJVomdRRTQ8XFAI/+39CViWODa1BT67ZGuGdoaPO8aDyGIrH2AJf8MaeTNo+h44PPNhg1woWqx4tayWOU69kCweg7Q0G/MUssjDsjANrZqz5pAJxIFYnmLwq6QbY6Gvk2IksNXPOaXkGr7eGsuayDd8pdtsA72Z/jVoYABh96qkhk47A8PncxyuUcdJJlKfMxK8bh0lf+eH4a0ANUe3qLAGn1fX9JWKXAsALTwupvw1TS0mQQZ1zC06KMwZDZBE4V2fbW5+TqyUJ8tMOK9lN2P6/5CYcp9fh7m6Ebec1qonoR9Y0mov8ahE/VjoePufXWBrrgywtOy3oM0F/iXnm1Iv9wYgdcbooXBFIjqUf+jE2mrYuJyGnsqqaq6Sryb4GT/MPAakPzaG1E=
pro01 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChLypT9WYMaInraCa9LdmRw8KUY8CI7PAzgOIOD4x1RLqL8x497a/qm6WdAFnqcloEZXUOn2NRASuqxHj8LB9/k7En1WRgLdFEDBiF10FEjfkWY+5AFPr4fywpRLVjZalCfJNHd1UbPPYH7C0YUL20iqxtMo0XcViY939Ue2e7zRj4LNWG4hTSetDRaE9QVgIzKVYnnFO79Hyi8JIeuS5+TomXw2RQhNsz9m6WAoL6UTAe7fA5h6RdIS/uzQKzSGdnqlii0ntqGyw8FiHk5mJ9yKVOQUPjKeNYFV5LbJ/7zinAUUJEcltGk/ivguy1xHXNrPT6bWvnEs1en3mw/97lG7/9zGhe6JRC2HbwE8GyFCB9xE+p1yTNbrEuuuOaNLyJ+9pTC2du+Vt0zV1FHJDK2u7aPomm6dAurqT6VIMs4xSGAYhPqhvUeXqQDVPa24DDLbQaQnFXg/Kg8WKibReLMzsOnQknBNFoxWiV5IqWVOm+WL6VIeTOULwAqAksvvE=
10.0.0.70 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChLypT9WYMaInraCa9LdmRw8KUY8CI7PAzgOIOD4x1RLqL8x497a/qm6WdAFnqcloEZXUOn2NRASuqxHj8LB9/k7En1WRgLdFEDBiF10FEjfkWY+5AFPr4fywpRLVjZalCfJNHd1UbPPYH7C0YUL20iqxtMo0XcViY939Ue2e7zRj4LNWG4hTSetDRaE9QVgIzKVYnnFO79Hyi8JIeuS5+TomXw2RQhNsz9m6WAoL6UTAe7fA5h6RdIS/uzQKzSGdnqlii0ntqGyw8FiHk5mJ9yKVOQUPjKeNYFV5LbJ/7zinAUUJEcltGk/ivguy1xHXNrPT6bWvnEs1en3mw/97lG7/9zGhe6JRC2HbwE8GyFCB9xE+p1yTNbrEuuuOaNLyJ+9pTC2du+Vt0zV1FHJDK2u7aPomm6dAurqT6VIMs4xSGAYhPqhvUeXqQDVPa24DDLbQaQnFXg/Kg8WKibReLMzsOnQknBNFoxWiV5IqWVOm+WL6VIeTOULwAqAksvvE=

root@pro03:~# ls -la /etc/ssh/
total 143
drwxr-xr-x  4 root root     15 Nov 10 08:13 .
drwxr-xr-x 92 root root    184 Sep  1 12:33 ..
-rw-r--r--  1 root root 573928 Feb  8  2023 moduli
-rw-r--r--  1 root root   1650 Feb  8  2023 ssh_config
drwxr-xr-x  2 root root      2 Feb  8  2023 ssh_config.d
-rw-r--r--  1 root root   3208 Jun 25 07:48 sshd_config
drwxr-xr-x  2 root root      2 Feb  8  2023 sshd_config.d
-rw-------  1 root root    505 Jun 25 07:47 ssh_host_ecdsa_key
-rw-r--r--  1 root root    172 Jun 25 07:47 ssh_host_ecdsa_key.pub
-rw-------  1 root root    399 Jun 25 07:47 ssh_host_ed25519_key
-rw-r--r--  1 root root     92 Jun 25 07:47 ssh_host_ed25519_key.pub
-rw-------  1 root root   2590 Jun 25 07:47 ssh_host_rsa_key
-rw-r--r--  1 root root    564 Jun 25 07:47 ssh_host_rsa_key.pub
-rw-------  1 root root   5047 Nov  1 16:20 ssh_known_hosts.bak
lrwxrwxrwx  1 root root     25 Nov  1 15:45 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

root@pro03:~# cat /etc/pve/priv/known_hosts

root@pro03:~# cat /etc/ssh/ssh_known_hosts
cat: /etc/ssh/ssh_known_hosts: No such file or directory

Didn't we delete this somewhere along the post?

root@pro03:~# cat ~/.ssh/known_hosts
|1|gJB7f+MhRvc/tdDilRCmq6E2JIo=|m3cVjKlKiwmHt8rrahHPHKoyTNc= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJbGTIDbFTaWuaHzR8yeafOohHhjAq5FmOsv5Kj9uvK
|1|od67NCa0VilK+NUJmLM94X0jl/U=|mHQUW0Bi0wKy5E9pFeJeg08yQyY= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChLypT9WYMaInraCa9LdmRw8KUY8CI7PAzgOIOD4x1RLqL8x497a/qm6WdAFnqcloEZXUOn2NRASuqxHj8LB9/k7En1WRgLdFEDBiF10FEjfkWY+5AFPr4fywpRLVjZalCfJNHd1UbPPYH7C0YUL20iqxtMo0XcViY939Ue2e7zRj4LNWG4hTSetDRaE9QVgIzKVYnnFO79Hyi8JIeuS5+TomXw2RQhNsz9m6WAoL6UTAe7fA5h6RdIS/uzQKzSGdnqlii0ntqGyw8FiHk5mJ9yKVOQUPjKeNYFV5LbJ/7zinAUUJEcltGk/ivguy1xHXNrPT6bWvnEs1en3mw/97lG7/9zGhe6JRC2HbwE8GyFCB9xE+p1yTNbrEuuuOaNLyJ+9pTC2du+Vt0zV1FHJDK2u7aPomm6dAurqT6VIMs4xSGAYhPqhvUeXqQDVPa24DDLbQaQnFXg/Kg8WKibReLMzsOnQknBNFoxWiV5IqWVOm+WL6VIeTOULwAqAksvvE=
|1|3qOVN1xvgFdAM9Lf/+scpg2PU0c=|I6VPm57UviRJNCJFSx+4S7VAlbw= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGF/lak6nSWgTGXvPXtp6EQFOvkG+afS0c5uPnb6jH0By74gP4epij5GGcntKQDCuQfI0LBLUVvi9Tndb2ztGCE=
|1|lYW7ZSxut8ns/EVvoHQ2kx6HJHg=|9lMpqhMUfmP7VBOljWSEiIkpYtw= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXngw7UOgyBWrT4tRwR7w8NW7lAI+90guIkBNMVxeFs
|1|g3+Wa6EfA2ePz1HLdB8pQ/QSJNM=|9WecbKuQtMpTAvN+8VjxdhF7JC4= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoGEtXJJk0Ex0lUZvvlmxMDbId6O522ABB5zm0LyjoNF8nPWBP/D/EguYj0ZXC9C+Aeb1e3bWAGxyPGuBdTmlpo5aHGmViZj0c5Xhz6htOaKhfCJG8Aom7hQmeuplEynEcYH5xQARrJmgRVITOthDr0Z4g2RxLALSDAB4Uwkgrzz7AcPJGPOKmE0/sfF5sO69/8wu9YQzKs9dzcZrU/cYhh3noaNoAE5qjPylddx+b6aSPmdHLIFwSjrWxN/BBDVizjGXnMCAZ5KqoFv74DsR2y8WZC2Razebj1Eqaa1T9NKCg5h9CNl/GVN0FMbJLWLppHwc/O0nUndOQ4sx3UMyKcqYuFBkz9+/F6YAnKLwFOhSZaxbt30bJ+SdUj5sixyICtVraSr+3MF0pK+Kj5lkA4uf4QYfdhpLfR3sg3CJ4T1xzMA1QzyjLw3Co2rRDap91ljNkDcZEBl950CsqG3B+9WNLhPO/uQs66zEuM9RUQTU652RRcIT0Q65/87quMec=
|1|0fv2bbsPIShoNBqrErJYdvZ3BJw=|3nSchngs2Yy3jHVqS6yNg7+YCl0= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFCQQuBJfKXy9EbEW8cDVo9J8v5Y/7M6wRQrgd1Y27ga8jweeSfXmwiNN5S7i8j92tMp9oNkPOGGyJQjmihcqxI=
|1|qesH+IiA+4uZsyZgSjTj+wmrsjU=|QvklcoW5uRqO1Glx+TfT0sXXeVM= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVGLXy04ZvytqU1dEeEJsfQPBUEgMHCDXbujzyQTQil
|1|2l4nI3plhHosY8WBfez6eFwp13c=|WPCut7sTkytXluYbi4rGHuZiLyI= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFBGfe+9rT1G1D0od7I+UPTuxwZUqf82GKd6aeb+ujyRnwkYf72fSsWJlu9PxhTw+pmlT2oP3RVOKA+uros5AEqu/GqVnU6XgbQ7yLNSUNIjQkXAkxpSnOiXHYRECuyGozmn0y7pQNzGC02yPDHv+f91WH223C0PXZnkj0TGSm0GfLDCM0nnuy2saTHi+1rzNDxOrCWsF6+J63/9Mj+Y1M1sSUY5qUdkaLRKQFlnkikP2stdBJ2vIgQFLke+ItyO3S7q1wohLHASBB5BQDsWsbznDRnDwGdQtZ4byHIG5egaEqC0jL8XZgC0A0Qt+tOfPzqhR3pNdDWHyPoqUeAp7IDAtRRQXi4+kykf3hE9aA+OnhSzhEO5uUVkZ0kc3Ix/cly56hpMKPTVPgQR0IeAkCWQ9n1xBA1lI+kekTOB+ein+Regr8LcnlyH7+Ol+X2RgEKzcXO7DzXjXh2/0kjCoqwTPMlX5XFdcFG9rv0EZPAAykC/Qiqurm9z3uO8a/xyU=
|1|GZP2SL2w7R0jWJ4Rv6yQWe5LEuQ=|pYegcAIiA7o8VMTA9jUxKXx1qy8= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCx9hc2AkUmq1iezPBQfUuf//9vlgg4THX41tZc8V0sfyy8HuFGz+97HoOYqQqVtngzA4G9nkKhDPw1q1CrVM2g=
|1|TGEDc9UJKd/fKMOX0RVcVPYDsGI=|0ZCJVL4ZKbpPcELWx2oPN/ANvqo= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE7Zm/o9qGt2wW/I9vIy1DvVZlOTnWSRxc91oN3XZzPE
|1|MqU9Yt2OqyrruxWWa/LztnrLLcw=|qEG2ntTzuingwFIX5ViDIK9qkbE= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvqCh0wCzXxDjdt3k4K53Vu8fKVmzE/PwtfVQ06LSXAl5DUDqfS03438kE+NmPY1ojoOoXmMaziXs+S5mEzhi/k/nrgHoqCf5aMHD91rE3yfshndSGnAgTkpr0qKOe/IQWCXGtxxCIcmScHvQ9CtvehGGZea3N4VtOb9yITRA91JbOe6BJDnrayqI89GZ/5mwyDTQ5gZBhWKRsECONEJKjpNe3GXYuw839J03l6rfuyM1FOz79vlS5kuxhHupvzMYLAF8+0XvaaWdJ2siFhKAWTXWEghkf/LcWLYMY7p2uN0pGwcTHj4ZtSU7sAR1OEC73vt8+sax7EX9lEc8E3/8y2pmUnT3362nzVCmAcoTvfMQbiHW253IoXOiD+nWHvEbZ60lNJYDbSvJw23rih5wSHQd/Nn/8MytYce1b47vQi2xIS7EysQZKdjYvtVGZCYI9nht23Y5pKign6RKwQaSRgSA+5S8U086pTfkAgtIWvCC8LQWTIRITyR6kTcwWdUc=
|1|zkjtyV2N96g7VoBx++pV6dfy8Wo=|LW4yOs8fM88XwmtNq9nVKP0nN1s= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYOqKMvYTSy3s25Cm7gfd/HSgEM2+xr2fMh5h0La+tFRq9UyLtrlGavIeN7YLt6fhcy/4L52lGf2Y3lygD/G0M=
 
Sorry, I'm overloaded with work but need to solve this too.

Using "cat /etc/pve/priv/known_hosts" on all nodes shows no results.

This is the other output from each node:

Code:
root@pro01:~# ls -la /etc/ssh/
total 143
drwxr-xr-x  4 root root     15 Nov 10 08:13 .
drwxr-xr-x 92 root root    184 Sep  1 12:24 ..
-rw-r--r--  1 root root 573928 Feb  8  2023 moduli
-rw-r--r--  1 root root   1650 Feb  8  2023 ssh_config
drwxr-xr-x  2 root root      2 Feb  8  2023 ssh_config.d
-rw-r--r--  1 root root   3208 Jun 23 18:46 sshd_config
drwxr-xr-x  2 root root      2 Feb  8  2023 sshd_config.d
-rw-------  1 root root    505 Jun 23 18:46 ssh_host_ecdsa_key
-rw-r--r--  1 root root    172 Jun 23 18:46 ssh_host_ecdsa_key.pub
-rw-------  1 root root    399 Jun 23 18:46 ssh_host_ed25519_key
-rw-r--r--  1 root root     92 Jun 23 18:46 ssh_host_ed25519_key.pub
-rw-------  1 root root   2590 Jun 23 18:46 ssh_host_rsa_key
-rw-r--r--  1 root root    564 Jun 23 18:46 ssh_host_rsa_key.pub
-rw-------  1 root root   5047 Nov  1 16:18 ssh_known_hosts.bak
lrwxrwxrwx  1 root root     25 Nov  1 16:03 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

root@pro02:~# ls -la /etc/ssh/
total 143
drwxr-xr-x  4 root root     15 Nov 10 08:13 .
drwxr-xr-x 92 root root    184 Sep  1 12:32 ..
-rw-r--r--  1 root root 573928 Feb  8  2023 moduli
-rw-r--r--  1 root root   1650 Feb  8  2023 ssh_config
drwxr-xr-x  2 root root      2 Feb  8  2023 ssh_config.d
-rw-r--r--  1 root root   3208 Jun 24 10:44 sshd_config
drwxr-xr-x  2 root root      2 Feb  8  2023 sshd_config.d
-rw-------  1 root root    505 Jun 24 10:43 ssh_host_ecdsa_key
-rw-r--r--  1 root root    172 Jun 24 10:43 ssh_host_ecdsa_key.pub
-rw-------  1 root root    399 Jun 24 10:43 ssh_host_ed25519_key
-rw-r--r--  1 root root     92 Jun 24 10:43 ssh_host_ed25519_key.pub
-rw-------  1 root root   2590 Jun 24 10:43 ssh_host_rsa_key
-rw-r--r--  1 root root    564 Jun 24 10:43 ssh_host_rsa_key.pub
-rw-------  1 root root   5047 Nov  1 16:19 ssh_known_hosts.bak
lrwxrwxrwx  1 root root     25 Nov  1 15:54 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

root@pro03:~# ls -la /etc/ssh/
total 143
drwxr-xr-x  4 root root     15 Nov 10 08:13 .
drwxr-xr-x 92 root root    184 Sep  1 12:33 ..
-rw-r--r--  1 root root 573928 Feb  8  2023 moduli
-rw-r--r--  1 root root   1650 Feb  8  2023 ssh_config
drwxr-xr-x  2 root root      2 Feb  8  2023 ssh_config.d
-rw-r--r--  1 root root   3208 Jun 25 07:48 sshd_config
drwxr-xr-x  2 root root      2 Feb  8  2023 sshd_config.d
-rw-------  1 root root    505 Jun 25 07:47 ssh_host_ecdsa_key
-rw-r--r--  1 root root    172 Jun 25 07:47 ssh_host_ecdsa_key.pub
-rw-------  1 root root    399 Jun 25 07:47 ssh_host_ed25519_key
-rw-r--r--  1 root root     92 Jun 25 07:47 ssh_host_ed25519_key.pub
-rw-------  1 root root   2590 Jun 25 07:47 ssh_host_rsa_key
-rw-r--r--  1 root root    564 Jun 25 07:47 ssh_host_rsa_key.pub
-rw-------  1 root root   5047 Nov  1 16:20 ssh_known_hosts.bak
lrwxrwxrwx  1 root root     25 Nov  1 15:45 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

root@pro04:~# ls -la /etc/ssh/
total 143
drwxr-xr-x  4 root root     15 Nov 10 08:13 .
drwxr-xr-x 92 root root    184 Sep  1 12:35 ..
-rw-r--r--  1 root root 573928 Feb  8  2023 moduli
-rw-r--r--  1 root root   1650 Feb  8  2023 ssh_config
drwxr-xr-x  2 root root      2 Feb  8  2023 ssh_config.d
-rw-r--r--  1 root root   3208 Jun 25 19:30 sshd_config
drwxr-xr-x  2 root root      2 Feb  8  2023 sshd_config.d
-rw-------  1 root root    505 Jun 25 19:29 ssh_host_ecdsa_key
-rw-r--r--  1 root root    172 Jun 25 19:29 ssh_host_ecdsa_key.pub
-rw-------  1 root root    399 Jun 25 19:29 ssh_host_ed25519_key
-rw-r--r--  1 root root     92 Jun 25 19:29 ssh_host_ed25519_key.pub
-rw-------  1 root root   2590 Jun 25 19:29 ssh_host_rsa_key
-rw-r--r--  1 root root    564 Jun 25 19:29 ssh_host_rsa_key.pub
-rw-------  1 root root   5047 Nov  1 16:20 ssh_known_hosts.bak
lrwxrwxrwx  1 root root     25 Nov  1 15:45 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

root@pro07:~# ls -la /etc/ssh/
total 100
drwxr-xr-x  4 root root     15 Nov 10 08:13 .
drwxr-xr-x 92 root root    184 Sep  1 12:36 ..
-rw-r--r--  1 root root 573928 Feb  8  2023 moduli
-rw-r--r--  1 root root   1650 Feb  8  2023 ssh_config
drwxr-xr-x  2 root root      2 Feb  8  2023 ssh_config.d
-rw-r--r--  1 root root   3208 Jun 26 18:43 sshd_config
drwxr-xr-x  2 root root      2 Feb  8  2023 sshd_config.d
-rw-------  1 root root    505 Jun 26 18:42 ssh_host_ecdsa_key
-rw-r--r--  1 root root    172 Jun 26 18:42 ssh_host_ecdsa_key.pub
-rw-------  1 root root    399 Jun 26 18:42 ssh_host_ed25519_key
-rw-r--r--  1 root root     92 Jun 26 18:42 ssh_host_ed25519_key.pub
-rw-------  1 root root   2590 Jun 26 18:42 ssh_host_rsa_key
-rw-r--r--  1 root root    564 Jun 26 18:42 ssh_host_rsa_key.pub
-rw-------  1 root root   5047 Nov  3 09:24 ssh_known_hosts.bak
lrwxrwxrwx  1 root root     25 Nov  1 15:45 ssh_known_hosts.old -> /etc/pve/priv/known_hosts

I feel like I'm missing some things you're asking for.
 
Let's do this.

1) First of all, for any of the following, do not SSH across nodes or via the GUI shell at all; SSH in from your machine directly.

It's okay if some of the rm commands report that there is nothing to delete.

2) SSH to each node individually, so ssh root@10.0.0.{70,71,72,73,76} one connection at a time from your station and do:
Code:
rm -rf ~/.ssh/known_hosts
rm -rf /etc/ssh/ssh_known_hosts
exit

3) Now pick one node, say .70, SSH in and:
Code:
rm -rf /etc/pve/priv/known_hosts
pvecm updatecerts
exit

4) Now SSH into every remaining node {71-73, 76} and:
Code:
pvecm updatecerts
exit

5) Now load GUI of any node and see.
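
For later reference, the same sequence can be driven from your workstation in one go; just a sketch, assuming these node IPs and that you can SSH in as root from your machine:

Code:
#!/bin/bash
# Sketch of steps 2-4 above; adjust the node list to your cluster.
nodes=(10.0.0.70 10.0.0.71 10.0.0.72 10.0.0.73 10.0.0.76)

# Step 2: wipe the local known_hosts copies on every node
for n in "${nodes[@]}"; do
    ssh "root@$n" 'rm -f ~/.ssh/known_hosts /etc/ssh/ssh_known_hosts'
done

# Step 3: on one node, also wipe the cluster-wide file, then regenerate
ssh "root@${nodes[0]}" 'rm -f /etc/pve/priv/known_hosts; pvecm updatecerts'

# Step 4: regenerate on every remaining node so each adds its own key back
for n in "${nodes[@]:1}"; do
    ssh "root@$n" 'pvecm updatecerts'
done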
 
I have five separate SSH sessions open to the nodes, from a remote Windows server I use to maintain this network.
I did the following:

Code:
root@pro01:~# rm -rf ~/.ssh/known_hosts
root@pro01:~# rm -rf /etc/ssh/ssh_known_hosts
root@pro01:~# rm -rf /etc/pve/priv/known_hosts
root@pro01:~# pvecm updatecerts
(re)generate node files
merge authorized SSH keys and known hosts

root@pro02:~# pvecm updatecerts
(re)generate node files
merge authorized SSH keys and known hosts

root@pro03:~# pvecm updatecerts
(re)generate node files
merge authorized SSH keys and known hosts

root@pro04:~# pvecm updatecerts
(re)generate node files
merge authorized SSH keys and known hosts

root@pro07:~# pvecm updatecerts
(re)generate node files
merge authorized SSH keys and known hosts

I then went to pro01 and was able to migrate a VM off of pro03 to pro07.
Very excited to see that it's working again :).

Now, should I test another migration from another node using the GUI once this one is done?
 
Migrating works again so I assume everything else does as well :).
So, my question is... if this happens again, do I just need to do the above sequence of commands?
And of course, I'll be more careful about the naming/IP conventions until I read that there is a new solution which I hope happens.

I cannot even begin to thank you for your help. I've had to move VMs for well over a month now!
 
Migrating works again so I assume everything else does as well :).

Glad to hear!

So, my question is... if this happens again, do I just need to do the above sequence of commands?

Very much so, but I would really stick to the order and not try to shortcut it (while I am logged in here, let me answer this two steps ahead already). What it does is first wipe any local known_hosts files (even leftovers after ssh-keygen -R) on every node, then wipe the cluster's common known_hosts, and then have every node add its own key to the (by then empty) cluster known_hosts. It has to be done from every node because at that point we have wiped all the shared locations with accumulated keys, so each node really only knows its own key. They all get added back one by one, and there you have it: a proper cluster known_hosts file.
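
If you ever want to sanity-check the end result, one quick look (just an illustration; it relies on the shared file using plain names/IPs, as in your earlier output) is:

Code:
# cut -d' ' -f1 /etc/pve/priv/known_hosts | sort | uniq -c

Each node should appear exactly once by name and once by IP, with no stale leftovers.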

And of course, I'll be more careful about the naming/IP conventions until I read that there is a new solution which I hope happens.
I do not know really ...
https://bugzilla.proxmox.com/show_bug.cgi?id=4252
https://bugzilla.proxmox.com/show_bug.cgi?id=4886
https://bugzilla.proxmox.com/show_bug.cgi?id=4670

I cannot even begin to thank you for your help. I've had to move VMs for well over a month now!
You are welcome. I basically wanted to figure it all out, and I think I did. There's still something broken with how new nodes are added. Basically, it is unfortunately safer to never reuse names/IPs. If a node is removed, it's removed; better to put in a freshly installed one. Even then, do not name it the same as some historical node in the cluster, nor have it use a previously used IP. Yeah, it sucks. I do not know why more people have not ranted about this.
 
I will definitely not make that mistake again and will also remember to power off a node before removing it from the cluster.
I didn't realize I had missed a step. That's the problem with so many comments, it's too easy to lose track.
If you feel I should redo the steps, I'm happy to do so.
 
I will definitely not make that mistake again and will also remember to power off a node before removing it from the cluster.
I didn't realize I had missed a step. That's the problem with so many comments, it's too easy to lose track.
If you feel I should redo the steps, I'm happy to do so.

I don't think it's necessary (given what I saw from your previous outputs). Just in case you have an issue in the future, the procedure is basically summed up in this one post (it was written to work no matter what; the only variable is of course your nodes):
https://forum.proxmox.com/threads/pvecm-updatecert-f-not-working.135812/page-2#post-604699
 
Yes, I like to let others know when something was solved and I certainly will do that. I'll also refer to it and re-read everything.

Again, I really appreciate your help and determination to get to the bottom of the problem (mostly me, but with a slight bug or potential improvements in the mix) :).
 
Hey! So this is a WONTFIX one: https://bugzilla.proxmox.com/show_bug.cgi?id=4252

Not being salty; just, if someone starts googling like me after spending 3 weeks on something, at least they might hit this thread and save themselves the effort. If it was a niche scenario for just the two of us, I guess no problem, then no one finds it.
 
Hey! So this is a WONTFIX one: https://bugzilla.proxmox.com/show_bug.cgi?id=4252

Not being salty; just, if someone starts googling like me after spending 3 weeks on something, at least they might hit this thread and save themselves the effort. If it was a niche scenario for just the two of us, I guess no problem, then no one finds it.
What happened to your account? Did someone delete this person's account for mentioning the above? This person helped me get things working again.
 
What happened to your account? Did someone delete this person's account for mentioning the above? This person helped me get things working again.
Hey Proximate, it is me. I just wanted to put you at ease: I myself asked for the account deletion, to wipe my profile data, which otherwise could not be done. It was also me who deleted some of my posts in our thread early on, so as not to confuse others with steps that did not work. I will not keep a permanent account on the forum; if I come back in the future to contribute, it will be from an ephemeral one.

I followed up in the bug report itself; as you can see, there is no censorship in there. I'm saying this just so as not to bash anyone.

Anyone who feels it's worth getting the bug a higher priority, please +1 (and CC yourselves) in the bug reports later on; there are 3 related ones:
https://bugzilla.proxmox.com/show_bug.cgi?id=4252 (keygen linking)
https://bugzilla.proxmox.com/show_bug.cgi?id=4886 (stale known_hosts)
https://bugzilla.proxmox.com/show_bug.cgi?id=4670 (stale authorized_keys)

Maybe it then gets a higher priority; otherwise it's going to be considered a "niche case" and not worth looking into.
 
You are an interesting and obviously kind person :).
I wondered if I was going nuts as I thought there seemed to be missing comments.
I captured the entire thread and was going to try to consolidate it for anyone that found it but you beat me to it.

Thank you for all this help!
 
You are an interesting and obviously kind person :).
I wondered if I was going nuts as I thought there seemed to be missing comments.
I captured the entire thread and was going to try to consolidate it for anyone that found it but you beat me to it.

Thank you for all this help!

Hey, minimal patch available (see attachment) at https://bugzilla.proxmox.com/show_bug.cgi?id=4886#c25.

Check your pveversion if you want to test it in place; best make a file backup first. It should prevent pvecm updatecerts from losing your SSH keys going forward (duplicates are not a problem, though you can weed them out as you wish and then just call pvecm updatecerts ad infinitum).
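
If you do test it in place, a minimal precaution sketch (the module path below is an assumption; the patch header tells you the exact file it touches):

Code:
# cp -a /usr/share/perl5/PVE/Cluster/Setup.pm /usr/share/perl5/PVE/Cluster/Setup.pm.orig
# pveversion -v | grep pve-cluster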
 
