Cluster on wildly different hardware questions

Dec 8, 2022
I'm considering taking my three-node setup and clustering them all together, but I have some questions, as this isn't the ideal case of three identical nodes. To start, I understand that I likely won't get the benefit of HA, which is fine, as I have so many VMs that use hardware passthrough that HA on those wouldn't be an option anyway. I'm mainly looking at clustering to have the advantage of one datacenter and a single frontend to see all nodes. Also, the ability to migrate some VMs might be useful. With that out of the way, here are my nodes, roughly (a join sketch follows the list):

1. 32c/64t Threadripper, 256GB RAM, mirrored boot drive, 1TB mirrored VM storage on NVMe named "VMs"
2. 16c/32t Ryzen, 64GB RAM, single LVM boot drive, 1TB mirrored VM storage on SSD named "VMs"
3. 4c/8t Intel NUC, 32GB RAM, single LVM boot drive, 500GB single LVM VM storage on NVMe named "VMs"
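
For reference, my understanding is that the joining itself is just a couple of pvecm calls; a minimal sketch, assuming node 1 is reachable at 10.0.0.1 (cluster name and address are placeholders):

```
# on node 1: create the cluster
pvecm create homelab

# on nodes 2 and 3: join, pointing at node 1
pvecm add 10.0.0.1
```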

If I wanted to cluster these three together, I imagine there would be a conflict in the datacenter because the VM storage devices currently share the same name without actually being the same thing. Some questions/possible solutions I think might work:

1. Is there any problem with the different boot devices or does the cluster not care? By default they all get named local.
2. I could redo the VM storage on nodes 2 and 3 so they have unique names, such as VMs, VMsB, VMsC. This would potentially solve the overlap problem (see the config sketch after this list)?
3. I could convert the VM storage on node 3 to a single-disk ZFS pool. All nodes would then have a ZFS pool named VMs, but that node's pool would be half the size.
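
If it helps, I imagine the cluster-wide /etc/pve/storage.cfg could then end up looking something like this. This is only a sketch: the storage types and node names are my assumptions, and I gather each entry can be restricted to the nodes that actually have that storage:

```
# /etc/pve/storage.cfg -- hypothetical IDs, types, and node names
zfspool: VMsA
        pool VMs
        content images,rootdir
        nodes threadripper

zfspool: VMsB
        pool VMs
        content images,rootdir
        nodes ryzen

lvmthin: VMsC
        vgname VMs
        thinpool data
        content images,rootdir
        nodes nuc
```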

I feel like I'm missing something obvious here that will make things not work or make them complicated. What is the best way to handle clustering for these machines (if any at all)? Do you think it best for me to just keep the nodes separate from each other?
 
I likely won't get the benefit of HA
If you just use kvm64 as the CPU type in your VMs, HA/migration works, even between AMD and Intel hosts.
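
For example (assuming your VM has ID 100), setting it from the CLI would look like:

```
# force the lowest-common-denominator kvm64 CPU type on VM 100
qm set 100 --cpu kvm64
```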

1. Different boot devices are no problem.
2. If you want live migration (and not a shutdown on A and a boot on B), your VMs need to be on a clustered filesystem; Ceph is the best example (a rough setup sketch follows below).
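
Very roughly, and only as a sketch (the Ceph network and device paths below are placeholders; every node needs a spare disk for an OSD):

```
# on every node: install the Ceph packages
pveceph install

# once, on the first node: initialize with a dedicated Ceph network
pveceph init --network 10.10.10.0/24

# on each node: create a monitor and an OSD on a spare disk
pveceph mon create
pveceph osd create /dev/nvme1n1

# once: create a pool for VM disks and register it as storage
pveceph pool create vmpool --add_storages
```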

What is the best way to handle clustering for these machines (if any at all)? Do you think it best for me to just keep the nodes separate from each other?
This depends on what you want and what you expect from clustering. If you have mostly VMs with passthrough hardware, then you won't see any benefit, that's right.
I'm mainly looking at clustering to have the advantage of one datacenter and a single frontend to see all nodes. Also, the ability to migrate some VMs might be useful.
Then, as said, you need a clustered filesystem (for live migration), ideally with the same number of disks and the same capacity on all hosts.
For offline migration you may get away with what you have now, but I don't know if an in-place conversion is possible. Personally, I always prefer fresh installs and then bring the VMs into the cluster from a backup source. That said, before you start playing around, back up your VMs, because a cluster doesn't allow the same VM ID to exist twice.
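
A minimal sketch of what I mean, assuming a VM with ID 100 and a backup storage called local (restore it on the cluster under a free ID):

```
# on the old standalone node: full stop-mode backup of VM 100
vzdump 100 --storage local --mode stop --compress zstd

# copy the archive to a cluster node, then restore it under a
# new, unused ID (200 here) onto that node's VM storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 200 --storage VMs
```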
 
Hey, thanks for the response. Ceph is something I've seen mentioned so many times but I really don't know much at all about it. Perhaps it is time I educate myself.

With that said, in my use cases, at least for now, I'd be okay with having the VM shut down, copied to the other node, and then booted up. Uptime on those VMs isn't crucial in those moments.
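
If I understand right, that's just a plain offline migration, which I gather is a one-liner once clustered (the VM ID and target node name are made up):

```
# offline-migrate VM 100 to the node named pve2
qm migrate 100 pve2
```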

With that in mind, are there any problems with my existing storage setup: needing unique names for the VM storage drives, or converting my NUC to a single-disk ZFS pool with half the drive capacity?
 
Perhaps it is time I educate myself.
Done right (with enough disks in the pool and enough NICs in a bond) it runs great, and working live migration gives me smiles. ;)

With that in mind, are there any problems with my existing storage setup: needing unique names for the VM storage drives, or converting my NUC to a single-disk ZFS pool with half the drive capacity?
I haven't used or done that; you should wait for an answer from another user with more knowledge. I think you need the same pool names and the same capacity for ZFS replication to work.
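
If you do end up with matching pools, replication itself is configured per VM; a sketch, assuming VM 100 replicating to a node called pve2 every 15 minutes:

```
# replication job 100-0: send VM 100 to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule '*/15'
```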
 
Done right (with enough disks in the pool and enough NICs in a bond) it runs great, and working live migration gives me smiles. ;)
I will definitely research it when I've got a spare moment, if for no other reason than just to be better educated.
I haven't used or done that; you should wait for an answer from another user with more knowledge. I think you need the same pool names and the same capacity for ZFS replication to work.
Appreciate the honesty on this one. I guess I could always replace the 500GB drive with a 1TB one and run it as a single-drive ZFS pool, and it would theoretically work. That said, hopefully someone with experience setting it up this way will chime in as well.

Thanks again for the information and taking the time.
 
Yeah, I think ZFS is a requirement; I also don't know what's possible with LVM and replication.
Yeah, changing the single drive to ZFS isn't a big issue for me; I'd have to dump the VMs on that node to cluster anyway. I'm just trying to find out whether the smaller drive is an issue and what to be aware of in terms of naming pools across nodes.
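
I assume the conversion itself would be something like this (the device path and node name are placeholders, and it wipes the disk):

```
# DESTROYS the disk's contents -- device path is a placeholder
zpool create -f VMs /dev/disk/by-id/nvme-EXAMPLE

# register the pool as VM storage, restricted to this node
pvesm add zfspool VMs --pool VMs --content images,rootdir --nodes nuc
```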
 
