Adding more ports to hosts, redundancy, more bandwidth

Manny Vazquez

Hi,

I have a 3-host cluster, but sometimes the migrations, replications, etc. impact the performance of the VMs' internet connectivity.

The servers I am using have 4 Ethernet ports; I am only using one (cabled) as of now.

I am interested in increasing the bandwidth available BETWEEN the hosts, so they can transfer to each other faster.

What would be the best approach to this? I prefer a solution that does not involve rebooting the hosts.

Screenshot is of one of the hosts, but they are all identical in hardware and setup.
[attached screenshot: network configuration of one host]

I think I could accomplish this in one of two ways (or maybe even both), as per this image.
The black links are the ones that are now in place.
The red and blue are my proposals.
[attached diagram: existing links in black, proposed links in red and blue]
Doing both would actually utilize all the NICs, which I think is the ideal case.
Now, the questions are:
Which mode to use?
Which IP addresses to use (gateway, subnet)? Currently my whole network (the VMs and physical machines, DBs, etc.) is in 172.21.82.0/24.
For instance, this host's main IP is 172.21.82.22, host 3's is 172.21.82.23, ..
VM1 > windows > 172.21.82.101
VM2 > Windows > 172.21.82.102
VM5 > Linux VM > 172.21.82.205

etc ..

So I am not sure if I should use a new set of addresses (for instance 10.1.10.0/24) or stick to the range I know is working fine.

Also, I know there are options in the network configuration, like in the next image:
[attached screenshot: network options in the GUI]
But I have no idea (and cannot really understand the technical explanations in the manual) as to what would be the best solution in my case.
Basically, I have 3 unused NICs on each server and plenty of ports on my switch. I would like to set things up so there is no single point of failure and performance is increased, mostly between the hosts for replication and moving large files.
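(For reference, the spare ports and their link state can also be listed from the CLI with something like the commands below; the interface name eno2 is just an example, the real names will differ per machine.)

Code:
# list all NICs with state and addresses (brief format)
ip -br link show
# check whether a specific spare port has link/carrier
ethtool eno2 | grep 'Link detected'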

Thanks
 
Hi,

I would use all ports and do it as you suggest.

For the full mesh, I would use a routed setup; on average this gives more speed than a bridged one.
The downside is that you cannot use multicast in a routed network.

And yes, you should use separate subnets for these three networks.

I would do it this way:
Port 1: 172.21.82.0/24 -> switch; network for the VMs
Port 2: 10.10.1.0/24 -> switch; network for the first corosync ring [1]
Ports 3,4: 10.10.2.0/24 -> mesh; network for replication, migration and the second corosync ring [1] [2] [3]

or

Ports 1,2 (bond): 172.21.82.0/24 -> switch; network for the VMs and the second corosync ring [4] [1]
Ports 3,4: 10.10.2.0/24 -> mesh; network for replication, migration and the first corosync ring [1] [2] [3]

For the bond, use LACP if your switch supports it; if not, use active-backup mode.
(A configuration sketch for the second variant follows the links below.)

1.) https://pve.proxmox.com/wiki/Separate_Cluster_Network
2.) https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
3.) https://pve.proxmox.com/wiki/Manual:_datacenter.cfg
4.) https://pve.proxmox.com/wiki/Network_Configuration#_linux_bond
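
To make the second variant more concrete, here is a minimal /etc/network/interfaces sketch for one host. It assumes the four NICs are named eno1-eno4, the gateway is 172.21.82.1 and the mesh node addresses are 10.10.2.1-3; all of these are placeholder values, not taken from this thread, and the exact bond/bridge option names can differ between PVE and ifupdown versions. The mesh part follows the routed pattern described in [2]:

Code:
# Host 1 (172.21.82.22 / 10.10.2.1); hosts 2 and 3 are the same apart
# from the addresses and the /32 routes.

auto lo
iface lo inet loopback

# eno1 + eno2: LACP bond carrying the VM bridge (the two switch ports must
# be configured as one 802.3ad group; use bond-mode active-backup otherwise)
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 172.21.82.22
        netmask 255.255.255.0
        gateway 172.21.82.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# eno3 -> host 2, eno4 -> host 3: routed full mesh for migration/replication.
# Same node address on both mesh NICs, one /32 route per peer.
auto eno3
iface eno3 inet static
        address 10.10.2.1
        netmask 255.255.255.0
        up ip route add 10.10.2.2/32 dev eno3
        down ip route del 10.10.2.2/32

auto eno4
iface eno4 inet static
        address 10.10.2.1
        netmask 255.255.255.0
        up ip route add 10.10.2.3/32 dev eno4
        down ip route del 10.10.2.3/32

Migration and replication traffic can then be pointed at the 10.10.2.0/24 network via the migration network setting in datacenter.cfg [3].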
 
I have a 3-host cluster, but sometimes the migrations, replications, etc. impact the performance of the VMs' internet connectivity.

Hi,

You should check whether the source of the performance degradation really is the network/bandwidth! It is possible that the network is only one part of your problem, and that the other part is disk I/O related. It is also possible that after you solve the network part, your disk I/O load will increase (better network speed => higher I/O usage) and VM performance will end up the same as with the old network design!
I suggest running some tests to see how it behaves!
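One quick way to separate the two is to measure raw network throughput between two hosts with iperf3 and take a rough disk baseline with pveperf (which ships with Proxmox VE). The IP and path below are placeholders, adjust them to your setup:

Code:
# on host 1:
apt install iperf3
iperf3 -s

# on host 2: raw TCP throughput to host 1 over the current link
iperf3 -c 172.21.82.22 -t 30

# rough disk/fsync baseline on the storage you replicate from
pveperf /var/lib/vz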
 

Thanks for the detailed explanation.

I am getting 3 more (new) servers which were originally intended for SQL Server, but I am going to use them first for Proxmox, to install the new 5.2 and do all this work before they go into production. That way I can test all of this without impacting production.
 
