[solved] network configuration

Elleni

Active Member
Jul 6, 2020
I have 2 PVE nodes, each with a dual-port Mellanox ConnectX-3 EN 40 Gbit card. I want to use at least one of these ports to connect the two nodes directly with a cable for the cluster traffic.

Question: Can the system profit if I connect the two PVE nodes twice at 40 Gbit for cluster traffic, or will only one of the 40 Gbit connections be used?

The normal network for VMs and for access to the web interface will be connected to a 10 Gbit SFP+ switch.
 
Thanks. Pardon me if this doesn't make sense, as I have never used it before, but would a bond of two of these Mellanox ports, used as the link device, increase the maximum performance of the cluster network, or is 40 Gbit more than enough for the cluster network between two nodes anyway?

Another question: for the other network we use a 10 Gbit SFP+ Intel NIC, and I had to set offload-rx-vlan-filter off in order to get connectivity. If I understood that correctly, I could either enable this filtering with ethtool or disable it in the networking configuration.

What is this VLAN filtering used for, and what is the recommended way to proceed? Is it correct that, provided the NIC supports it, enabling it would increase the throughput of the NIC, since the filtering would be done in hardware instead of software, or do I misunderstand this?
 
Thanks. Pardon me if this doesn't make sense, as I have never used it before, but would a bond of two of these Mellanox ports, used as the link device, increase the maximum performance of the cluster network, or is 40 Gbit more than enough for the cluster network between two nodes anyway?
Corosync needs a stable, low-latency connection; throughput itself is not important. Regarding redundancy, it would be better to use two connections over separate switches than a bond running over a single switch, because that switch would become a single point of failure.
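As a rough sketch, two separate corosync links can be defined per node in /etc/pve/corosync.conf; the node names and addresses below are only placeholders, adjust them to your setup:

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1    # first link, e.g. the direct 40 Gbit connection
        ring1_addr: 192.168.1.21  # second link over a separate path
      }
      node {
        name: pve2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 10.10.10.2
        ring1_addr: 192.168.1.22
      }
    }

With corosync 3 (kronosnet) the cluster traffic fails over to the other link if the currently used one goes down.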

Another question: for the other network we use a 10 Gbit SFP+ Intel NIC, and I had to set offload-rx-vlan-filter off in order to get connectivity. If I understood that correctly, I could either enable this filtering with ethtool or disable it in the networking configuration.

What is this VLAN filtering used for, and what is the recommended way to proceed? Is it correct that, provided the NIC supports it, enabling it would increase the throughput of the NIC, since the filtering would be done in hardware instead of software, or do I misunderstand this?
If the offloading is enabled, VLAN tag handling on incoming packets is done by the network card; the result should be reduced CPU utilization on the host.
One possibility to disable it is a post-up command with ethtool in /etc/network/interfaces.
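A minimal sketch of such an entry (the interface name eno1 is only a placeholder for the actual SFP+ port):

    auto eno1
    iface eno1 inet manual
            # turn off hardware VLAN filtering on this port when it comes up
            post-up ethtool -K eno1 rx-vlan-filter off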
 