So, I am going through the manual and I have some questions.
5.7.1 states:
The network should not be used heavily by other members, ideally corosync runs on its own network. Do not use a shared network for corosync and storage (except as a potential low-priority fallback in a redundant configuration).
5.7.2 further says:
When creating a cluster without any parameters the corosync cluster network is generally shared with the Web UI and the VMs and their traffic.
So here are my questions.
I have 4 nodes, each with its initially configured IP on vmbr0:
pve1: vmbr0 10.100.3.1
pve2: vmbr0 10.100.3.2
pve3: vmbr0 10.100.3.3
pve4: vmbr0 10.100.3.4
For cluster creation I would like to add another NIC to every server, let's say:
pve1: vmbr1 10.0.0.1
pve2: vmbr2 10.0.0.2
pve3: vmbr3 10.0.0.3
pve4: vmbr4 10.0.0.4
So I would like to use 10.0.0.x as the primary corosync link and 10.100.3.x as the backup link. But the 10.0.0.x network is only connected to itself, so there is no interference from outside; it is like an island.
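What I picture for cluster creation is roughly the following (just a sketch based on my reading of the docs; the cluster name and the priority values are my own assumptions, and I understand a higher priority number means the link is preferred):

    # on pve1, addresses per node as listed above
    pvecm create mycluster --link0 10.0.0.1,priority=20 --link1 10.100.3.1,priority=10
    # on each other node, e.g. pve2
    pvecm add 10.0.0.1 --link0 10.0.0.2 --link1 10.100.3.2

Is that the right way to express "use 10.0.0.x first, fall back to 10.100.3.x"?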
I want to use vmbr0, i.e. 10.100.3.x, for the web UI and the VMs' Internet traffic.
I fear locking myself out of the UI with this setup.
As far as I understand, the Proxmox host will use the bridge with the configured gateway for Internet traffic, right? Is the UI also served on the bridge with the configured gateway?
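Just to make it concrete, the /etc/network/interfaces on pve1 would look roughly like this (a sketch; the NIC names eno1/eno2 and the router address 10.100.3.254 are assumptions on my part):

    auto vmbr0
    iface vmbr0 inet static
        address 10.100.3.1/24
        gateway 10.100.3.254        # only vmbr0 gets a gateway
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    auto vmbr1
    iface vmbr1 inet static
        address 10.0.0.1/24         # corosync island network, no gateway
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

From what I can tell, pveproxy listens on port 8006 on all addresses, so the UI should still be reachable via 10.100.3.1, but that is exactly the part I would like to have confirmed.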
Inter-VM communication will also go through the public link, right? So if two machines talk to each other, the public link will be the bottleneck, correct? I would bond 2x 1 Gbit for public VM traffic, but that would mean my VMs can only communicate with 2 Gbit shared between all of them. With the Ceph storage it would be fairly easy to saturate 2 Gbit (e.g. copying files from one VM to another). Can I make Proxmox route inter-VM traffic differently from Internet traffic? I would have one 40 Gbit IB link available; even with the roughly 50% performance loss of IPoIB compared to Ethernet on PVE, a ~20 Gbit inter-VM connection would be fine. I just think inter-VM traffic limited to 2 Gbit would be rather slow for the entire cluster.
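One idea I had for that (no clue if it is the intended way): give each VM a second virtual NIC on a bridge that sits on the fast link and let the guests use it for VM-to-VM traffic, e.g.:

    # vmbr2 on the IB/fast link and VMID 101 are just examples I made up
    qm set 101 --net1 virtio,bridge=vmbr2

The guests would then talk to each other over that second subnet. Is that how people usually solve this, or does Proxmox offer something better?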
Furthermore, I will use Ceph with:
2x 56 Gbit ConnectX-3 adapters in 802.3ad mode as Link1
1x 40 Gbit ConnectX InfiniBand adapter as Link2
For Ceph itself I am fairly sure this is all right.
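In Ceph terms I am assuming Link1 would become the public network and Link2 the cluster network, so something like this in /etc/pve/ceph.conf (the subnets are placeholders I made up):

    [global]
        public_network = 10.10.10.0/24     # 2x 56 Gbit bond (Link1)
        cluster_network = 10.10.20.0/24    # 40 Gbit IB (Link2)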
I am thankful for any advice.
On a second note, I could use the IB link for migration, since that needs a lot of bandwidth. But luckily I have a 10 Gbit switch and some cards lying around; I could install those cards, so the public interface could get up to 20 Gbit (LACP). The Internet uplink to that switch would be 1 Gbit, which is enough, but the VMs would talk to each other at high speed.
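If I dedicate a network to migration, I assume it would be a single line in /etc/pve/datacenter.cfg like this (the CIDR is a placeholder):

    # route migration traffic over a dedicated network
    migration: secure,network=10.0.10.0/24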
Well...
Would it possibly work to bond eno1, eno2, and ib1? balance-alb would allow dynamic load balancing; would that work, or would mixing these two network types have other drawbacks?
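For the eno1/eno2 part alone, what I would try is a plain balance-alb bond like the sketch below (interface names assumed; whether ib1 can even be a member of such a mixed bond is exactly what I am unsure about, so it is left out here):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode balance-alb
        bond-miimon 100
    # vmbr0 would then use "bridge-ports bond0" instead of a single NIC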