Cluster Topology - Proxmox 5.2

Talion

Active Member
Jun 19, 2018
Hello,

I would like to build an HA Proxmox cluster. I have reserved 5 servers for this project, with these specs:


2 x Server;

Fujitsu PRIMERGY RX1330 M3
Intel Xeon E3-1230v6 3.50GHz
Memory: 32GB ECC DDR4 RAM
2x 120GB SSD (OS)
2x 250GB PCI-E SSD (pool; planning to use RAID 1)
2x Intel Corporation 82599ES 10-Gigabit SFI/SFP+

3 x Server;

Fujitsu PRIMERGY RX2510 M2
2x Intel Xeon E5-2620v4 2.10GHz
96GB REG ECC DDR4 RAM
2x 120GB SSD (OS)
4x 250GB PCI-E SSD (pool; planning to use RAID 10)
2x Intel Corporation Ethernet 10G 2P X520 Adapter

My VMs' workload will be bandwidth-heavy; each node will consume up to 10Gbps. Mostly nothing CPU-intensive.

All hosts will be Linux.

My questions are:

1. Should I use hardware RAID or software RAID (for both the OS and the VM pool)?
2. Should I use LVM+ext4 or ZFS (for both the OS and the VM pool)?
3. Should I use a specific partition structure on the nodes?
4. Should I use SR-IOV?
5. Should I use a Linux bridge or Open vSwitch?
6. Should I use host-model or host-passthrough for my KVM setup?
7. Can I use a large swap space for overcommitment (extra RAM for VMs), since I have NVMe?
8. Do you advise installing Proxmox from its ISO, or on top of another OS like Debian Stretch?
9. Is there any recommended hardening guide that Google may be hiding from me?
10. Beyond these steps, which points should I consider for a stable Proxmox cluster? I want a fully redundant cluster for my workload.


Thank you so much, everyone, for your feedback.

-talion
 
Hi Talion,

You don't say anything about what you expect regarding I/O operations (mostly read, write, or mixed) on your VMs. Will you use a separate storage LAN?
 
Hi Guletz,

Yes, there will be separate networks for storage, VM Internet traffic, and perhaps management.

My disk I/O will be low. Most of my read/write operations will happen on ramfs. We are building a streaming system like twitch.tv.


Best Regards,

Talion
 
1. Depends on whether you are going to use local storage, clustered storage (Ceph), or a SAN. How are you accessing your source data?
2. For streaming, LVM+XFS. ZFS has the potential to slow down unpredictably as the volumes fill up.
3. Not sure what you mean by this question.
4. See 1.
5. Linux bridge, unless you're planning something sophisticated with L2 routing.
6. I am guessing you're asking about your network interface. Bear in mind that a NIC passed to a VM is necessarily NOT AVAILABLE to the host or other VMs. With a single 2-port NIC, that isn't really workable ;)
7. This is a loaded question. I don't really know how to answer; my sense is no, but I don't truly understand your use case. Streaming does not normally require much RAM at all.
8. Installing from the Proxmox installer is the quickest and easiest way to get there.
9. Laughs. Too big a question for here; lots of books available.
10. "Fully redundant" can mean disk, host, cluster, data center, continent...
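To illustrate point 5: a minimal Linux bridge on a Proxmox node lives in /etc/network/interfaces and might look like the sketch below. The interface name (eno1), bridge name (vmbr0), and addresses are placeholders, not values from this thread.

```shell
# /etc/network/interfaces -- minimal Linux bridge sketch (names/IPs assumed)
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1   # physical uplink enslaved to the bridge
        bridge_stp off
        bridge_fd 0
```

VM virtio NICs are then attached to vmbr0; no Open vSwitch package is needed for this simple L2 setup.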
 
Hi again,

My knowledge about video streaming is almost /dev/null, so my responses may be wrong. I can guess that you will need as much RAM as you can get (you mention ramfs).
So if you need the extra RAM, then I guess ZFS will not be an option (it can take 6-8 GB on each node).
1. I would use no HW RAID; I would install Debian using only the 2x SSDs (mdraid for an OS mirror), and then install PMX.
2. LVM+ext4, if ZFS is not an option.
3. Multiple OS partitions can improve node security (e.g. separate /var, /boot, /tmp, /home and /usr).
5. A bridge is OK.
9. There is endless advice on how to harden any setup, but every security improvement has a downside (time spent, systems to watch, lower performance, and many, many more).
10. ... hard to say, but in my own case, when I want a stable system (host, network, and so on) I start like this:
- make a good plan on paper (including my critical goals, my less critical goals, and nice-to-haves)
- discuss this plan with my friends (to get their perspective)
- correct my initial plan and maybe some goals
- study the Internet (failure and success stories) for each piece of software/hardware I need to use (30-60 days)
- start to get the hardware for my final plan
- test the hardware (storage, memory, networking, and so on)
- modify the plan if needed
- deploy all the software (OS included) that you planned, and try out any catastrophic scenario that has a good chance of happening (one disk broken, a network segment broken, one server unavailable, power down, test your backups, and so on)
- go to the next step ... improve your system's performance and security
- validate all your segments again (hardware, full/partial recovery, backups, and so on)
- monitor your system (capacity problems, hardware/software failures, load...)
- CREATE procedures for the most likely bad events (reinstall a node, recover a VM/service, and so on)

... and most importantly, document everything you do (successes and failures).

This takes a lot of time and resources, but in the long term you will be happy ;) And do not think that after some time there is nothing left to do ... review your plans/procedures in a loop.
I usually invite my friends to see what I have done, and they must find any problems they can discover (so I get a good excuse for a friendly beer ;) )

... it is not so hard if you want to do it ;)
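The mdraid OS mirror mentioned in point 1 could be created roughly as below. This is a destructive sketch, not a recipe: the device and partition names (/dev/sda2, /dev/sdb2) are assumptions, and in practice the Debian installer's partitioner can do the same thing interactively.

```shell
# Sketch only: mirror the two OS SSDs with mdadm (device names assumed)
# WARNING: these commands destroy data on the named partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0                                 # filesystem for the OS mirror
mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # persist the array config
update-initramfs -u                                # so the array assembles at boot
```

Afterwards, Proxmox VE can be installed on top of the Debian system per the official "Install Proxmox VE on Debian" wiki procedure.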
 
Thank you so much for the responses. I have covered most of the parts. Now there are a few things left.

I have 5 servers in 2 identical groups: 2x Fujitsu PRIMERGY RX1330 M3 + 3x Fujitsu PRIMERGY RX2510 M2. To use Ceph, should I make a general cluster with all 5 of them and create the Ceph cluster on the 3x RX2510? That way I could manage the 5 PMX servers centrally and use 3 servers for Ceph storage. If my question is not clear, please let me know.


Best Regards,

Talion
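The layout described above (one 5-node PVE cluster, with Ceph only on the three RX2510 nodes) is a supported setup and would roughly follow this shape with the PVE 5.x CLI tools. The cluster name, IPs, storage network, and OSD device below are placeholders, not values from this thread.

```shell
# On the first node: create the PVE cluster (name is a placeholder)
pvecm create mycluster
# On each of the other four nodes: join it (IP of the first node assumed)
pvecm add 10.0.0.1

# Only on the three RX2510 nodes: install and configure Ceph
pveceph install
pveceph init --network 10.10.10.0/24   # dedicated storage network (assumed)
pveceph createmon                      # one monitor per Ceph node
pveceph createosd /dev/nvme0n1         # one OSD per NVMe pool disk (assumed)
```

All five nodes can then use the resulting Ceph pool as shared storage via the cluster-wide storage configuration, even though only three of them run OSDs.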
 
Hello Alex,

I want to use Ceph as my storage system. Is SR-IOV not possible if I use live migration? Or are there further limitations if I use SR-IOV?

Best Regards,

Talion


 
>>I want to use Ceph as my storage system. Is SR-IOV not possible if I use live migration? Or are there further limitations if I use SR-IOV?

The question is WHAT you want to use it for, and what hardware you intend to pass. To be clear, a device can't be used by the hypervisor AND passed to a VM at the same time...

(edit: it's technically possible, but doesn't yield stable results. I should add that interfaces used for cluster communication should not be shared with VMs in any manner, be it SR-IOV or bridged.)
 
>>Each node will consume up to 10Gbps bandwidth connection.

with small packets ?

For virtio vs SR-IOV, it's more about pps than bandwidth. If you need a lot of pps with small packets (e.g. a virtual router), SR-IOV may be the better choice.

Currently live migration with SR-IOV is not possible, but I have seen some recent news on the qemu-devel mailing list about a new feature: an SR-IOV primary + virtio backup that takes over when live migration occurs.
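For reference, creating SR-IOV virtual functions on an Intel 82599/X520 port is done through sysfs; the sketch below assumes the kernel driver supports it and that the interface name is eth0 (a placeholder).

```shell
# Sketch: create 4 virtual functions on one 10G port (interface name assumed)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
# list the VFs that should now appear as PCI devices
lspci | grep -i "virtual function"
```

The resulting VF PCI IDs can then be passed to individual VMs via PCI passthrough, with the caveats about live migration and cluster-communication interfaces noted above.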
 
