Cluster with slightly different hardware

gth871r

New Member
Jul 24, 2021
I'm looking to deploy a Proxmox cluster with Ceph shared storage and high availability of the VMs, including automatic failover. I've already got two HP Xeon E-2200 6-core servers. Unfortunately, that exact model is discontinued. We've also been purchasing Supermicro-based E-2300 8-core servers. I know that, in a perfect world, all three servers in the cluster would be identical, but I understand that requirement isn't ironclad. Would these servers be close enough? What sorts of problems should I be watching out for, and are there any good tests to run to make sure everything is okay before going into production?
 
I don't know if I would say that in a perfect world you use all the same machines for all nodes.
The positive side is that if it works, it should work on all nodes... and that's also the negative side:
if an update or a piece of hardware causes trouble, it will cause trouble on all nodes.

An all-Intel cluster should not be any problem.

We are running a mixed AMD/Intel cluster without any problems,
but we needed the opt-in 5.19 kernel to make live migration work between Intel and AMD.
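For reference, on Proxmox VE 7 pulling in the opt-in kernel was roughly a matter of the following; the VM ID and node name in the migration test are just examples:

    # on each node: install the opt-in 5.19 kernel, then reboot into it
    apt update
    apt install pve-kernel-5.19
    reboot

    # after the reboot, confirm which kernel is running
    uname -r

    # then test an online migration in both directions between an
    # Intel node and an AMD node (100 and nodeB are example values)
    qm migrate 100 nodeB --online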
 
  • You should use the same CPU architecture because of live migration (I didn't try 5.19), so Intel-only or AMD-only; see the first sketch after this list.
  • Disks (OSDs) for Ceph should be in the same performance class, so no mixing of SSDs/HDDs/NVMe drives. If you do mix them, the slowest device will be the bottleneck for your cluster performance. Ideally the devices have nearly the same performance.
  • Disk size also matters. Ceph automatically applies a weight per OSD, so data distribution between mixed disk sizes is still okay, but it can have bad effects: if you, for example, lose a 4 TB disk and need to recover those 4 TB onto smaller disks (for example 1 TB disks), you can run into an osd-full problem, where at around 90% usage of a single OSD you can't use your pool anymore. See the second sketch after this list.
  • Network link speed should be the same for all Ceph nodes.
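On the architecture point, here is a minimal sketch of pinning a VM to a generic CPU type so live migration doesn't depend on vendor-specific flags. The VM ID is an example, and kvm64 is just one conservative choice; it exposes fewer CPU features to the guest:

    # give the guest a generic CPU model that both vendors can provide
    qm set 100 --cpu kvm64

    # equivalently, in /etc/pve/qemu-server/100.conf:
    # cpu: kvm64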

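And on the disk-size point, a few read-only checks that show how data is spread across mixed OSD sizes and how close each OSD is to the full ratio:

    # per-OSD size, CRUSH weight and usage; watch for single OSDs
    # that are much fuller than the rest
    ceph osd df tree

    # the nearfull / backfillfull / full thresholds currently in effect
    # (defaults are around 0.85 / 0.90 / 0.95)
    ceph osd dump | grep ratio

    # overall health and capacity
    ceph -s
    ceph df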
Edit: with "Ceph shared storage", are you thinking about an external storage cluster running Ceph? No hyperconverged setup?
 
Sorry, I misspoke: hyperconverged, with the Ceph disks on the same physical systems as the Proxmox nodes. I'm probably doing two 480 GB drives in a ZFS mirror for booting and running Proxmox, and additional drives on each system using Ceph for the storage of the VMs that will bounce back and forth. I need to learn a little more about Ceph to nail down the best config for the VM storage.
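For my own planning, and assuming Ceph is already installed and the monitors created via pveceph, this is roughly the per-node setup I have in mind once the ZFS boot mirror is done. Device names and the pool name are just placeholders, so please correct me if I'm off:

    # add each extra data disk on the node as an OSD
    # (/dev/sdc and /dev/sdd are example device names)
    pveceph osd create /dev/sdc
    pveceph osd create /dev/sdd

    # create a replicated pool for VM disks and register it as a
    # Proxmox storage (vm-storage is a placeholder name)
    pveceph pool create vm-storage --add_storages

    # rough performance sanity check before putting VMs on it
    rados bench 60 write -p vm-storage --no-cleanup
    rados bench 60 seq -p vm-storage
    rados -p vm-storage cleanup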
 
