Joining a node to an existing cluster - broken config

fir3wall90

Dec 9, 2025
Hello,

My first post, so hello everyone!

For years I have been using a trick for joining an existing node to a cluster (rough commands in the sketch below):
1) Make sure the node IDs are not conflicting
2) Copy /etc/nodes/<node_name> from what I want to be the new member to my existing member
3) rm /etc/nodes/<node_name> from the new member, then join the cluster
4) Everything syncs and works fine
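Roughly, the commands look like this. To be clear, the hostnames (new-member, existing-member) are placeholders, and I'm assuming a Proxmox-style `pvecm add` for the join step; the exact join command depends on your setup:

```bash
# 1) make sure node IDs on both machines do not conflict (manual check)

# 2) on the machine that will become the new member:
#    copy its node config directory over to the existing member
scp -r /etc/nodes/new-member root@existing-member:/etc/nodes/

# 3) still on the new member: remove the local copy, then join the cluster
rm -r /etc/nodes/new-member
pvecm add existing-member   # assumption: Proxmox-style join, stands in for "join cluster"

# 4) after joining, everything syncs back and the VMs show up on both nodes
```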

I know it's not a perfect or supported solution, but it was the only option due to some machine disk requirements (we could not back up, rebuild, join, and re-add).

However, today I hit a problem: I forgot about step 1. This means I landed in the following situation:
* I have a cluster with 2 nodes
* I can see VMs and manage them on my existing member (1st node)
* I cannot see or manage any VMs on my new member - BUT the VMs are still running

I have no problem SSHing/RDPing into any of my VMs, and the disks are still there, but I simply cannot manage them.

Does anyone have any idea how to resolve the current situation?

Key info:
* I have a copy of /etc/nodes/new_node from my new_node (so I have all of the config files under qemu-server etc.)
* The disks on my new_node are LVM volumes, so I can see them under lvs (quick check below)
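For reference, this is how I'm checking what I still have (the path is exactly where I copied things to):

```bash
# VM config files copied off the new node (one .conf per VM under qemu-server/)
ls /etc/nodes/new_node/qemu-server/

# the VM disks are plain LVM logical volumes, still visible on new_node
lvs
```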

I'm assuming that if I reboot my new_node, everything will go to hell. Is there any way to leave the cluster now, copy my old files back, and get to the state from before joining the cluster without losing any data?
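In the meantime, before touching anything, I'm keeping an extra copy of the current state just in case (the /root/cluster-rescue path is just a placeholder location I picked):

```bash
# stash an extra copy of the VM configs and a record of the current LVM layout
mkdir -p /root/cluster-rescue
cp -a /etc/nodes/new_node /root/cluster-rescue/
lvs > /root/cluster-rescue/lvs.txt
vgs > /root/cluster-rescue/vgs.txt
```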

Thanks
D