Cluster join failed: "This host already contains virtual guests"

okay. were the two nodes part of a cluster in the past?
 
Yes, and I don't know why the cluster fell apart. The PROX2 was offline (powered off) for quite some time (several months), but it eventually got powered back on - maybe this contributed somehow? I didn't need the VM that was on it, so I thought I had properly deleted it, but am getting the message that it isn't deleted and therefore PROX2 cannot be added back to the original Datacenter cluster.
 
could you post the output of "find /etc/pve" for both nodes (and indicate which output is from which node)?
 
okay. I would recommend clearing out the PROX2 dir on PROX1, and the PROX1 dir on PROX2 (if you want to play it safe, you can just move them somewhere outside of /etc/pve), and then retry the join.
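For anyone following along, the move could be sketched like this. The snippet operates on a throwaway mock of /etc/pve so it is safe to run anywhere; on a real node you would apply the same mv to the stale directory under /etc/pve/nodes (node names PROX1/PROX2 and VMID 100 are just placeholders from this thread):

```shell
set -eu

# Mock stand-in for /etc/pve on PROX1; on a real node you would use the real path.
MOCK=$(mktemp -d)
mkdir -p "$MOCK/nodes/PROX1" "$MOCK/nodes/PROX2/qemu-server"
touch "$MOCK/nodes/PROX2/qemu-server/100.conf"   # leftover guest config blocking the join

# Move the stale node directory out of the way instead of deleting it,
# so it can be restored if anything goes wrong.
BACKUP=$(mktemp -d)
mv "$MOCK/nodes/PROX2" "$BACKUP/"

ls "$MOCK/nodes"   # only PROX1 is left; the join can now be retried
```

Moving rather than deleting is the "play it safe" part: the old configs stay intact under the backup location until the rejoined node is confirmed healthy.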
 
That worked! Thank you! I cleared out the folders as directed (by moving them to a /tmp location), ran the join, and it worked!
 
I have the same problem, and I don't want to delete the VMs on one node, then join, and then restore. There should be a plugin or something built into Proxmox to solve this.
 
Unfortunately, that's not as easy as it sounds, as Fabian explained on the first page of this thread:
it's not that easy - changing guest IDs involves storage operations, storage config entries can also conflict, as could stuff like CPU models, firewall groups, backup jobs - and that's just off the top of my head without actually thinking through all the corner cases that might pop up.

you can already trivially work around this limitation manually if you know what you are doing (backup relevant content from /etc/pve, remove guest configs, join, restore content as needed), but there is so much that can go wrong that this will not be automated. 99% of the time a node is freshly installed, then joined to an existing cluster while empty.
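The manual workaround Fabian outlines (back up, remove guest configs, join, restore) might look roughly like this. Again it runs against a mock tree so it is safe to execute; on a real node the paths would be /etc/pve/qemu-server and /etc/pve/lxc, step 3 would be an actual pvecm add, and the VMIDs 100/9100 are made up:

```shell
set -eu

# Mock stand-in for /etc/pve; on a real node you would use /etc/pve itself.
PVE=$(mktemp -d)
mkdir -p "$PVE/qemu-server" "$PVE/lxc"
echo "memory: 2048" > "$PVE/qemu-server/100.conf"

# 1. back up relevant content before touching anything
BACKUP=$(mktemp -d)
cp -a "$PVE/qemu-server" "$PVE/lxc" "$BACKUP/"

# 2. remove the guest configs so the node counts as empty
rm -f "$PVE"/qemu-server/*.conf "$PVE"/lxc/*.conf

# 3. (on a real node: join the cluster here, e.g. with pvecm add)

# 4. restore configs as needed, choosing a VMID that does not clash in the cluster
cp "$BACKUP/qemu-server/100.conf" "$PVE/qemu-server/9100.conf"
```

As the quote warns, step 4 is where things can go wrong: a restored config may reference storages, CPU models, or firewall groups that conflict with the cluster, which is exactly why this is not automated.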


I don't see the problem though: if you have a cluster or another single-node PVE, you can move the VMs and LXCs to another node/PVE. If you don't have a cluster or another PVE, you should think about your availability requirements and buy the needed hardware.
 