Cluster join failed: "This host already contains virtual guests"

Came across this today - ugh. I would understand if it checked for VMID conflicts prior to joining the cluster (and threw an error if there was a conflict), but flat-out refusing to join a cluster just because I have a VM is rather annoying. That said, I can see there are workarounds, so I will certainly give those a go :)
 
There is a lot more to it than just the VMID ranges - all the config files would need to be merged (and could potentially conflict). Checking for local guests is just the proxy for "is destroying the local config okay or not" (or "is this an unused node that can be joined").
 
I was hoping a node could join the cluster when there is no VMID conflict, but that doesn't seem possible.
The cluster uses 2xxx as VMIDs,
while the node to be joined uses 3xxx VMIDs.
 
A joining node should be completely empty - all of its configuration in /etc/pve will be lost when joining.
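As a quick pre-flight check, this is a sketch of how to confirm a node really is empty before joining (run on the node that will join):

# both should list no guests
qm list
pct list
# these are the guest config directories the join check looks at
ls /etc/pve/qemu-server /etc/pve/lxc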
 
Here is what I did to resolve this (a shell version of these steps is sketched below):

- stop all VMs on the joining node
- move the VM config files in /etc/pve/nodes/[NODENAME]/qemu-server to /home/_cluster_bck
- join the cluster
- copy the backed-up VM config files back to /etc/pve/nodes/[NODENAME]/qemu-server

done =)
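A shell version of those steps might look like this (a sketch; NODENAME and the backup path are placeholders, and conflicting VMIDs or storage definitions still have to be resolved by hand):

# on the joining node, before the join: stop all running VMs
for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm stop $vmid; done
# stash the VM config files outside /etc/pve
mkdir -p /home/_cluster_bck
mv /etc/pve/nodes/NODENAME/qemu-server/*.conf /home/_cluster_bck/
# join the cluster (address of an existing cluster node)
pvecm add <cluster-node-ip>
# once the join has completed, copy the configs back
cp /home/_cluster_bck/*.conf /etc/pve/nodes/NODENAME/qemu-server/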
Worked for me
 
No, this won't be changed.
One of the reasons why this limitation exists is that there can be a conflict of VM/CT IDs.
You could add some kind of ID check and, when the IDs don't conflict, allow a node with VMs to join a cluster (a sketch of such a check is below).

I forgot about this limitation, moved VMs around with some downtime, and then wanted to set up my cluster. The join is failing, because two of my three servers already have VMs.

Now I have to do a lot of extra work, with potential risks and downtime, to solve this.
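For what it's worth, a manual pre-check for ID conflicts is easy to script. A sketch (run the first command on a cluster node and on the joining node, then compare the two lists; the /tmp filenames are just examples):

# collect local guest IDs (the .conf basenames) into a sorted list
ls /etc/pve/qemu-server /etc/pve/lxc 2>/dev/null \
  | grep -E '^[0-9]+\.conf$' | sed 's/\.conf$//' | sort -u > /tmp/ids.txt
# with one list from each side on the same machine, every line
# printed here is a conflicting VMID
comm -12 /tmp/ids-cluster.txt /tmp/ids-node.txt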
 
Ouch. I deleted the 100 to 105 .conf files in qemu-server and the 102 in lxc; after joining, all the VMs I had made are gone. How can I get those files back, or recreate the .conf files?
 
Did you force join the node?

You should have a `/var/lib/pve-cluster/backup` directory on the node with a gzipped sqlite file. This is a backup of the pmxcfs database that backs /etc/pve. It is created when a node joins a cluster.
You can extract the configs from there using sqlite tooling.
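For example, something along these lines (a sketch; the backup filename varies, and the tree table queried below is the pmxcfs database schema):

ls /var/lib/pve-cluster/backup/
# if the backup is a gzipped SQL dump, rebuild a database from it
# (a gzipped binary database would just need gunzip instead)
zcat /var/lib/pve-cluster/backup/config-*.sql.gz | sqlite3 /tmp/recovered.db
# list the files stored in the pmxcfs tree table
sqlite3 /tmp/recovered.db "SELECT name FROM tree;"
# dump a single guest config, e.g. VM 100
sqlite3 /tmp/recovered.db "SELECT data FROM tree WHERE name = '100.conf';"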
 
I have a perfectly working PVE 7.4-3 in production and, on newer and better hardware, installed 8.0.3 with the intention of making a cluster out of them and moving the LXCs and VMs to the better hardware...
(I thought that once I got it all working on the new hardware, I could convert the old PVE into a PBS...)

I ended up in this thread because I obviously have LXCs and VMs running on the "old", perfectly working node... So...

What would be the best procedure to achieve what I want?
(Which, to repeat, is: transfer my LXCs and VMs from a working PVE 7.4-3 to a freshly installed 8.0.3.)
 
Backup and restore, or, if the systems are clustered together, migration.
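In commands, the backup-and-restore route looks roughly like this (a sketch; the VMIDs, paths, and archive names are examples and will differ):

# on the old node: back up a VM and a container
vzdump 100 --mode snapshot --dumpdir /mnt/transfer
vzdump 200 --mode snapshot --dumpdir /mnt/transfer
# on the new node: restore them from the produced archives
qmrestore /mnt/transfer/<vzdump-qemu-100-archive> 100
pct restore 200 /mnt/transfer/<vzdump-lxc-200-archive>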
 
So, I clustered them and migrated. Now I have all the LXCs and VMs on the better hardware...

I ran into the quorum problem, which I did not know existed, when I tried to take a snapshot of a VM I had migrated, so I had to turn the old node back on before I could do it... Luckily I had not disassembled it yet, but that is the plan...

So... should I change the number of votes needed (I still have not looked up how to do this)?
Or should I perform a procedure to un-cluster and leave the better-hardware node standalone (I mean, not part of any cluster), like the first node was?
I only used the clustering to be able to migrate...
Maybe convert the old one into a PBS (I don't know if there is a procedure for such a thing)?

I think I need to un-cluster, but I don't know how to do it...

What would you do?
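For reference on the two options above, both are single pvecm commands (a sketch, assuming a two-node cluster where the old node is being retired):

# temporary quorum workaround while the old node is powered off
# (tells votequorum to expect only one vote; use with care)
pvecm expected 1
# permanent removal, run from the node that stays; the removed node
# must be offline and must not rejoin with its old state
pvecm delnode <old-node-name>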
 
Sorry, new to the cluster world. Is it only the node that you want to join to the cluster that must not hold VMs? So PVE1 has VMs, and PVE2 joins with no VMs? Or must PVE1 and PVE2 both be empty, with everything re-created afterwards via the backup option?
 
Only the nodes which will join the cluster must be free of VMs.
The nodes already in the cluster, or the first node which creates the cluster, can have VMs.

So PVE1 can have VMs, but PVE2 (which joins) has to be empty. If you have different storage layouts, you will have to add them for PVE2 again after joining, as PVE2 will overwrite its own settings with the common settings of the cluster.

The process is described in the documentation, too:
[0] https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_join_node_to_cluster
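For completeness, the join itself is a single command on the empty node (a sketch; the address is that of an existing cluster member):

# on PVE2, join the cluster created on PVE1
pvecm add <IP-of-PVE1>
# afterwards, verify storage and re-add anything only PVE2 had
pvesm status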
 
Thanks for answering my question. Very helpful!
 
root@PROX2:~# ls /etc/pve/qemu-server
root@PROX2:~#
root@PROX2:~# ls /etc/pve/lxc
root@PROX2:~#
root@PROX2:~# ls /etc/pve/nodes/
PROX1 PROX2
root@PROX2:~#

PROX1 is the host of the cluster I want to join. PROX2 is the "empty" node I want to join to the cluster hosted by PROX1, but I get the error message from the subject. I thought I had deleted all existing VMs from PROX2, but I still get the error. What are my next steps?
 
As a test: when you add a new VM on PROX2, what number does it propose for the new VM?
VM ID: 102

Also discovered this:
root@PROX2:~# cat /etc/pve/.vmlist
{
  "version": 3,
  "ids": {
    "100": { "node": "PROX1", "type": "qemu", "version": 2 },
    "101": { "node": "PROX1", "type": "qemu", "version": 1 }
  }
}
root@PROX2:~#
 
Please post the output of "pvecm status" and the contents of "/etc/pve/corosync.conf" from both nodes.
 
PROX1:
root@PROX1:~#
root@PROX1:~# pvecm status
Cluster information
-------------------
Name: LOCALNET
Config Version: 1
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Thu Apr 11 19:45:06 2024
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1.5
Quorate: Yes

Votequorum information
----------------------
Expected votes: 1
Highest expected: 1
Total votes: 1
Quorum: 1
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.48.51 (local)
root@PROX1:~#
root@PROX1:~#
root@PROX1:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: PROX1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.48.51
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: LOCALNET
  config_version: 1
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

root@PROX1:~#
root@PROX1:~#


PROX2:
root@PROX2:~#
root@PROX2:~# pvecm status
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
root@PROX2:~#
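If a half-finished join attempt left stale state on PROX2 (which the .vmlist contents above would suggest, since PROX2 already knows about PROX1's guests), the node-separation steps from the Cluster Manager documentation reset the local cluster state. A sketch - this discards the node's own /etc/pve contents, so only run it on a node with no guests:

systemctl stop pve-cluster corosync
pmxcfs -l                      # start pmxcfs in local mode
rm -f /etc/pve/corosync.conf   # may not exist, as here
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster
# then remove the other node's leftover directory, if present
rm -rf /etc/pve/nodes/PROX1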
 
