Is it possible to split off a piece of a proxmox cluster, and create a new cluster from it?

copec

New Member
Jun 24, 2023
I can think of a couple of ways to do it manually, but I wonder if there is a known procedure?
 
qm remote-migrate is deemed experimental ... but it is probably the cleanest option.
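(For reference, a minimal sketch of what that can look like - the VMIDs, token secret, host, fingerprint, storage and bridge below are all placeholders, so check `man qm` on your version for the exact syntax:)

```
# run on the source node: migrate VM 100 to the remote cluster as VM 100
# token secret, host, fingerprint, storage and bridge are placeholders
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret>,host=192.0.2.10,fingerprint=<target-cert-fingerprint>' \
  --target-storage local-zfs \
  --target-bridge vmbr0 \
  --online
```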

The other option would be to literally move everything you need to one node and then separate it from the cluster as per [1] "Separate a node without reinstalling", then start a new cluster from it (there is a rough sketch of the procedure after the link below) - it will leave some skeletons in the closet that you might not like later, but it does work.

The remaining manual options (splitting quorum and keeping some nodes in one cluster and the rest in another) would be messier still.

[1] https://pve.proxmox.com/wiki/Cluster_Manager
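(To spell out the gist of [1] for anyone finding this later - node and cluster names are placeholders, the last step assumes you do want a fresh cluster on the separated node, and this permanently detaches it, so read the wiki section in full before running anything:)

```
# on the node being split off
systemctl stop pve-cluster corosync
pmxcfs -l                     # restart the cluster filesystem in local mode
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster

# on one of the nodes remaining in the old cluster
pvecm delnode <separated-node-name>

# on the separated node, to start the new cluster from it
pvecm create <new-cluster-name>
```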
 

Actually, I'd like to take the opportunity - if he forgives me - to ask @fabian here about remote-migrate. I remember being told it is a "preview", but is there anything with known rough edges (or is it just that the syntax might still change going forward, so that people do not build scripts around it)? It should be the best way to move things from one place to another without all the potential after-effects.
 
it mainly has some limitations that we want to sort out before marking it as a regular feature:
- some storage plugins lacking import/export
- CPU difference handling
- VLAN handling

and of course, like you said, as long as it is marked as experimental we are still "allowed" to revamp the interface/parameters/names and expect users to adapt ;)
 
This is not really related, but we have a PVE cluster set up on a bunch of servers that are attached in pairs to multipath SAS JBODs (it was inherited that way). We just split the drives up so that half go to one server and half to the other in each pair.

If a server dies completely, we can import the zpool on the other node and manually copy the virtual machine config files over under /etc/pve/, but I figured I would ask while you are paying attention to this thread: is there a built-in way to do that in PVE?

I was thinking of just integrating it into a hookscript, but that leads to a line of questioning about cluster resources and locking - is there any external way to lock resources in a PVE cluster, beyond the locking of VMs/CTs to their respective node?
 
well, obviously you can script it all (I'd only use direct operations on /etc/pve for things that are not exposed, like "stealing" another node's guest config, and go with the API/pct/qm/.. for the rest), but I don't think that's easily integrable into PVE itself as a first-party feature - it's too specific to your particular setup.
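(For reference, a minimal sketch of that manual path under the setup described above - the pool name, VMID and node names are placeholders, and it assumes the dead node really is down/fenced and that the cluster still has quorum, since /etc/pve is read-only without it:)

```
# on the surviving node of the pair
zpool import -f tank                      # force-import the pool from the dead node

# "steal" the guest config by moving it into this node's directory in /etc/pve
mv /etc/pve/nodes/dead-node/qemu-server/100.conf \
   /etc/pve/nodes/$(hostname)/qemu-server/100.conf

qm start 100
```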
 

Many people have classically used this particular setup for HA (multi-host SCSI/SAS/FC/iSCSI/etc.). It could obviously be abstracted with hardware in the middle, and then all the PVE nodes could just use the iSCSI/ZFS storage backend, but that adds a whole layer of complexity and latency that really isn't necessary IMO. If a zpool were a first-class resource that a VM/CT could depend on, and it were locked to a specific node just like the VMs/CTs that have a dependency on it, the logic would be pretty straightforward. In any case, I really appreciate the product and the support you offer on these forums - it's been great!

It seems that crm/pacemaker works fine in parallel with the PVE stack on top of corosync in my test cluster, so I'll experiment with that. Thanks again for the replies.
 
you might be able to implement it using a custom storage plugin (requires Perl knowledge ;)), then our HA stack could probably do the recovery for you.
 