Hello everyone,
I'm encountering an unusual networking issue in our 4-node cluster, and I’m hoping to get some insights or solutions from the community.
Cluster Setup:
- Nodes: 4
- Network Cards: 25 Gbps
- Switch: MikroTik, 25 Gbps ports
Scenario 1: Using Linux Bridge
- Configuration: Two Windows VMs connected via a Linux bridge through the MikroTik switch.
- Performance: iperf tests show 15-20 Gbps.
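For reference, a test along these lines can be run with iperf3 (a sketch; the IP address is a placeholder, and multiple parallel streams are usually needed before a single sender can approach 25 Gbps):

```shell
# On the receiving VM (placeholder setup; iperf3 assumed installed):
iperf3 -s

# On the sending VM; 10.0.0.2 is a placeholder address.
# -P 4 opens 4 parallel TCP streams, since one stream rarely
# saturates a 25 Gbps link; -t 30 runs the test for 30 seconds.
iperf3 -c 10.0.0.2 -P 4 -t 30
```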
Scenario 2: Using SDN Zone
- Configuration: Both VMs placed in the same SDN zone, running on two separate nodes.
- Performance: iperf tests drop to 1-1.5 Gbps.
- Same Node Setup: When both VMs are on the same node within the SDN zone, iperf improves to 2-3 Gbps.
What I’ve Checked:
- SDN Configuration: Verified that the 25 Gbps interfaces are utilized for the traffic. This is confirmed by monitoring the switch traffic.
- Network Adapter: Using the virtio network adapter.
- Multiqueue Options: Tried different multiqueue settings without any improvement.
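For completeness, the multiqueue experiments were along these lines (a sketch; VM ID 101 and bridge name vmbr0 are placeholders for your actual values):

```shell
# Enable 4 virtio queues on net0 of VM 101 (placeholder VM ID/bridge).
# The queue count is usually matched to the number of vCPUs.
qm set 101 --net0 virtio,bridge=vmbr0,queues=4

# In a Linux guest, the active queue count can be checked with ethtool;
# Windows guests configure this in the virtio-net driver properties instead.
ethtool -l eth0
```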
Questions:
- Has anyone experienced similar issues with SDN zones in a multi-node setup?
- Are there specific SDN configurations or optimizations that might help achieve higher throughput between nodes?
- Could there be any limitations or bottlenecks within the SDN implementation that I might have overlooked?
EDIT: I'm using VXLAN SDN zones. When testing with a "Simple" zone on the same host, I can reach up to 10 Gbps between the two VMs.
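One thing worth double-checking with VXLAN zones is MTU: the encapsulation adds roughly 50 bytes per packet, so if the VM interfaces keep the full physical MTU, inner packets get fragmented or dropped, which can produce exactly this kind of throughput collapse between nodes. A minimal sketch of the arithmetic (the 1500/9000 values are assumptions about the NIC settings, not taken from the post):

```shell
# VXLAN overhead on an IPv4 underlay:
# inner Ethernet header (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) = 50 bytes
VXLAN_OVERHEAD=50

# With a standard 1500-byte MTU on the physical NICs,
# the VM interfaces inside the zone must use at most:
PHYS_MTU=1500
echo $((PHYS_MTU - VXLAN_OVERHEAD))   # 1450

# With jumbo frames enabled end-to-end on the underlay:
PHYS_MTU=9000
echo $((PHYS_MTU - VXLAN_OVERHEAD))   # 8950
```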
Thank you,
Luca