As for the slow speeds, here's the description.
The current setup is as follows:
4 independent nodes with 4 ports each: 2 of these ports are on switch 1 and 2 are on switch 2. Currently, each “set of ports” on each switch is 1 public and 1 private per node.
Our SAN has 16 network ports: 8 of these are plugged into switch 1 and 8 into switch 2.
The way the switches are configured is rather basic.
- 2 independent uplinks from the upstream host into each switch with a next-hop gateway
- VLAN 2 for all public network traffic
- VLAN 3 for all private network traffic (no gateway)
- There is a 4-port (4 Gbps) trunk (Brocade speak for Cisco EtherChannel) between VLAN 3 on switch #1 and VLAN 3 on switch #2 (effectively, the switches are stacked via this trunk)
- We have currently removed any tagged ports (Brocade speak for Cisco trunks) between VLAN 2 and VLAN 3
- Spanning-tree is on
There is no bonding at the moment, and we tested a very simple KVM VM with HA on 1 of the nodes.
We tested dd with an 8 KB write size and a count of 10k and got back 10 Mbps! On the SAN itself we got over 2.5 Gbps, and on a standalone SATA server we got 500+ Mbps.
We tested hdparm similarly across setups and likewise got low numbers.
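For reference, the tests were along these lines (a sketch: the target path and device name here are placeholders, not the exact ones we used, so adjust for your environment):

```shell
# dd write test: 8 KB blocks x 10,000 = ~80 MB written.
# conv=fsync forces a flush at the end so cache doesn't inflate the result;
# on the SAN mount you would typically add oflag=direct to bypass the page
# cache entirely (not used here because not all filesystems support it).
dd if=/dev/zero of=/tmp/dd-test.bin bs=8k count=10000 conv=fsync

# hdparm read test (needs a real block device, so commented out here;
# /dev/sda is a placeholder):
# hdparm -tT /dev/sda
```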
Networking options considered:
- Bond the private NICs on the Proxmox nodes (mind you, they are on 2 separate switches…)
- Bond the public NICs on the Proxmox nodes (again, on separate switches) and place them in active/passive failover mode
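For the active/passive option, here is a minimal sketch of what the bond could look like in Proxmox's `/etc/network/interfaces` (interface names, bridge name, and addresses are placeholders for illustration). Active-backup mode is the one bonding mode that works across two independent switches without any switch-side LAG configuration:

```
# Assumed NIC names eno1/eno2 and example addresses - adjust to taste
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```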
What is the optimal setup given our hardware?