Joining a cluster with already-created guest VMs

andrema2

Member
Dec 7, 2020
Hi

I saw a video on YouTube stating that a node should have no guest VMs when joining a cluster. I haven't found any mention of this in the documentation.

Is that really the case?
 
see: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_join_node_to_cluster
A node that is about to be added to the cluster cannot hold any guests. All existing configuration in /etc/pve is overwritten when joining a cluster, since guest IDs could be conflicting. As a workaround create a backup of the guest (vzdump) and restore it as a different ID after the node has been added to the cluster.

Meaning: your current cluster can have guests defined, but a node which you add to an existing cluster must not hold any guests.

I hope this explains it!
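For a concrete picture, here is a minimal sketch of the documented vzdump workaround, assuming a single VM with ID 100 on the joining node, restored afterwards under the (hypothetical) non-conflicting ID 9100:
Code:
# On the node to be added, before joining:
vzdump 100 --dumpdir /var/lib/vz/dump
# Join the cluster, then restore under a new ID
# (the exact dump file name depends on timestamp and compression;
# for containers, the equivalent is pct restore):
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 9100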
 
I have a work-around that *might* work for you, but has not been thoroughly tested.
There is a firm requirement, however, that there must not be any conflicts with the guest IDs or the node names.

On node1 (with guests):
Create a new cluster, or get the join information from the existing cluster.

On node2 (with guests):
Code:
scp -r /etc/pve/nodes/* root@node1:/etc/pve/nodes/
rm -r /etc/pve/nodes/*
Then join the cluster.

Please realize there is potential for things to go sideways!
I've done this to re-assemble a cluster I recently had to pick apart, and I can't provide any details on long-term issues or risks.
I cannot recommend this work-around at the moment for nodes that have never been in a cluster with each other.
I've done this with online VMs, and they remained operational through the process. The cluster-join process will overwrite the contents of /etc/pve/nodes with copies from the cluster... so copying your new node's directory to the cluster with scp beforehand indirectly restores it on cluster join.

Good luck.
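As a rough end-to-end sketch of the above (the cluster name, the root@ login, and the use of pvecm for the create/join steps are my assumptions, not tested verbatim):
Code:
# On node1 (with guests): create the cluster
pvecm create mycluster
# On node2 (with guests): stash node2's guest configs on node1, then join
scp -r /etc/pve/nodes/* root@node1:/etc/pve/nodes/
rm -r /etc/pve/nodes/*
pvecm add node1
# On join, /etc/pve/nodes is synced back from the cluster,
# which restores node2's guest configs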
 
I have a work-around that *might* work for you, but has not been thoroughly tested.
...
I have tested this with 2 nodes from a broken cluster, and it works.
 
This limitation is super frustrating.
Why can't the cluster service just check all nodes for actual VM ID conflicts and offer the option to change the VM IDs when there are conflicts?
Does Proxmox not use UUIDs as VM IDs internally? The VM ID number should only be a display name.
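For context on why IDs collide at all: guest configurations in the shared cluster filesystem are keyed purely by the numeric VMID, so two nodes each owning an ID 100 cannot coexist in one cluster. For example:
Code:
/etc/pve/nodes/<nodename>/qemu-server/100.conf   # VM 100
/etc/pve/nodes/<nodename>/lxc/101.conf           # container 101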
 
This limitation is super frustrating.
...
Are you faced with a situation you can't work around? The process for this is to add 'fresh' installations to a cluster, not to join 2 or more pre-existing nodes together.
Personally, it was annoying for my use case, as I had to tear down the cluster and re-assemble it without dropping any guests, but I'm willing to bet that's a niche situation. I'm learning Proxmox at this point and did something stupid that required the tear-down.
I'm happy with the process of encouraging the addition of 'fresh' hosts to a cluster rather than trying to untangle whatever other dependencies may be present when incorporating a host with pre-existing VMs.
 
Are you faced with a situation you can't work around? The process for this is to add 'fresh' installations to a cluster, not to join 2 or more pre-existing nodes together.
...
Has anyone figured out a workaround to this? The problem I am running into is that my cluster nodes are not physically close to each other (cross-datacenter), and each node hosts a FortiGate as its connector to the SD-WAN fabric. I don't want to join the nodes via the internet, but I don't have secure connectivity between them until the firewalls have been deployed onto the nodes.
 
Has anyone figured out a workaround to this? ...
Unfortunately, I couldn't find a reasonable workaround, so I ended up having to deploy a temporary host into each colocation to house the firewall: rebuild the permanent node, join the cluster, and then rebuild the firewall on the permanent node. This was a fair amount of additional overhead (cost and effort). I'm hoping somebody can come up with a better way to do this in the future.
 
I have a work-around that *might* work for you, but has not been thoroughly tested.
...
scp -r /etc/pve/nodes/* root@node1:/etc/pve/nodes/
rm -r /etc/pve/nodes/*
...
Good luck.
Thanks a million

I mistakenly deleted the cluster information, and I couldn't find any way to re-enable the cluster. Everything seemed lost. When I did the
Code:
rm -r /etc/pve/nodes/*
all running VMs disappeared from the node's GUI, but once I was able to join the cluster, they were back, just in the state they had been: running and operational.
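For anyone repeating this, a quick sanity check after the join completes (the node name here is a placeholder for your own):
Code:
ls /etc/pve/nodes/<yournode>/qemu-server/   # guest configs are back
qm list                                     # VMs still listed and running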

Thanks man
 
I have a work-around that *might* work for you, but has not been thoroughly tested.
There is a firm requirement, however, that there must not be any conflicts with the guest IDs or the node names.
...
scp -r /etc/pve/nodes/* root@node1:/etc/pve/nodes/
rm -r /etc/pve/nodes/*
...
I broke my test cluster out of clumsiness...

But with your workaround I fixed it.
Great, thank you very much!
 
Has anyone figured out a workaround to this? ...
Exact same use case here: a new datacenter with 3 PVE nodes and two HA OPNsense firewalls. I would like to join the existing PVE cluster through the site-to-site tunnel.

The limitation of PVE not being able to join a cluster while a VM is running creates a chicken-and-egg problem when bootstrapping a new environment.
 
Exact same use case here: a new datacenter with 3 PVE nodes and two HA OPNsense firewalls.
...
Bad idea:
If your S2S tunnel is down and you shut down/start any VM on that one node, it won't start because of the expected votes in the quorum.
You can't even reboot the one Proxmox node behind the tunnel and have its VMs come back up: you will always have to set "pvecm expected 1" on that node to start any VM while the tunnel is down.
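E.g. (VMID 100 is just an example):
Code:
# On the cut-off node, while the tunnel is down:
pvecm expected 1   # lower the expected vote count so the node is quorate again
qm start 100       # VM starts can now proceed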

If your tunnel is terminated in an OPNsense VM on that one node, you get a chicken-and-egg problem xD
So the tunnel needs to be created on something other than the PVE node, like an EdgeRouter or something xD

But yeah, you get your node into the cluster, which is definitely a nice way to have everything in one cluster.
Especially with Proxmox, where the cluster actually means very little; it mostly just puts everything into one GUI.
You can still define failover groups between only some of the servers in the cluster, etc. (which is actually amazing).

But yeah, the quorum/expected-votes issue is the only reason why I'm not using a cluster over tunnels.

Cheers
 
So having multiple PVE clusters that can each always form a "local" quorum is the only reliable solution, I guess?
 
