[SOLVED] Failed node and recovery in cluster

czechsys

Renowned Member
Nov 18, 2015
Hi,

One of my cluster nodes hard failed due to failed disks in RAID. Because the cluster is on 6.2, we decided to upgrade to 6.4 (a required prerequisite for 7). The reinstalled node will have the same FQDN as the failed node. Now I have two possible ways:

1] remove the failed node from the cluster (aka cleanup) and add the reinstalled one
2] recover the failed node from backup (install 6.4, restore some config files)

PVE team, what's the better way? Every node add/remove is risky for me, but if 2] gets borked, it can break the cluster too. I am also interested in which node-identifying files are needed for recovery in case 2] (full paths preferred); I can think of the host key, root's authorized_keys, etc.

Thanks
 
I would remove the failed node from the cluster first, see the documentation (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node), in particular the note at the end about the remaining SSH fingerprints that you should remove.
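
For reference, a minimal sketch of that removal, assuming the failed node was named pve-node1 (a hypothetical name) and you run this on a healthy cluster member:

Code:
# Check current membership, then remove the failed node from the
# cluster configuration (it must stay powered off and never rejoin as-is).
pvecm nodes
pvecm delnode pve-node1

# Per the note in the admin guide, also delete the stale SSH
# fingerprints of the removed node from the cluster-wide known_hosts:
#   /etc/pve/priv/known_hosts  (remove the lines for pve-node1)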

After that, upgrade the existing cluster to the latest PVE 6.4 version. Then reinstall the now-repaired node freshly, either with PVE 7 or PVE 6.4. Depending on that choice, you can upgrade the cluster to PVE 7 before or after adding the reinstalled node back to the cluster.
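
Once the node is reinstalled, the rejoin itself is a single command; a minimal sketch, assuming 192.0.2.10 is the IP of a healthy existing cluster member (an example address):

Code:
# On the freshly reinstalled node (same FQDN as before), join the
# existing cluster via the IP of a healthy member.
pvecm add 192.0.2.10

# Verify quorum and membership afterwards.
pvecm status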
 
@czechsys
So you re-installed 6.2 on the failed node and joined the cluster?

Need your advice here. I want to keep steps ready in case any node fails in my three-node Proxmox + Ceph cluster.