Greetings,
We're running Proxmox 7.1-5 on Dell R7525 servers (AMD CPUs) with (2) Dell Broadcom 57414 dual-port 25GbE SFP+ NICs installed.
We've got (2) 25GbE switches cabled correctly and connected to our iSCSI storage appliance.
One card is in Riser 1 and the other is in Riser 3; I'll refer to the ports as R1P1, R1P2, R3P1, R3P2.
We are using (3) of the (4) NIC ports on the 25GbE cards to connect to the iSCSI appliance.
R1P1/2 and R3P1 are used for our iSCSI SAN network, and R3P2 is used as a dedicated NIC for our migration network, roughly as laid out in the sketch below.
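For reference, the host networking looks approximately like this; the interface names and addresses here are placeholders to illustrate the layout, not our exact configuration:

```
# /etc/network/interfaces (excerpt) -- illustrative layout only;
# interface names and addresses are placeholders
auto ens1f0np0              # R1P1 - iSCSI SAN
iface ens1f0np0 inet static
        address 10.10.10.11/24

auto ens1f1np1              # R1P2 - iSCSI SAN
iface ens1f1np1 inet static
        address 10.10.11.11/24

auto ens3f0np0              # R3P1 - iSCSI SAN
iface ens3f0np0 inet static
        address 10.10.12.11/24

auto ens3f1np1              # R3P2 - migration network
iface ens3f1np1 inet static
        address 10.10.20.11/24
```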
We are using multipath to map our LUNs to LVM VGs within Proxmox (roughly the steps shown below). This all works very well once it's up and running, but as soon as we start testing the redundancy we get very odd results.
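The storage side is essentially the following; the mpath and VG names are placeholders rather than our actual aliases:

```
# illustrative only -- device and VG names are placeholders
multipath -ll                          # verify the LUNs (mpatha, mpathb) and their paths are visible
pvcreate /dev/mapper/mpatha
vgcreate vg_san01 /dev/mapper/mpatha   # the VG is then added to Proxmox as LVM storage
```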
On boot the network is unavailable even though `ip a` shows all the interfaces as up.
Making any change to the host networking through the GUI (even just adding a comment) and clicking Apply Configuration brings the networking back up.
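As far as we can tell, Apply Configuration in the GUI is equivalent to reloading the interfaces config with ifupdown2, so presumably that is what actually brings it back:

```
ifreload -a   # reload the full /etc/network/interfaces config (ifupdown2)
```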
Additionally, if the networking is working and we physically disconnect a cable from any SFP+ port, the entire SAN network becomes unavailable (we can't even ping) until an ifdown/ifup is performed against the NIC that was physically disconnected. Once that ifdown/ifup is performed, the networking comes back and ping works again.
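For example, after pulling the cable on R3P1 we run something like the following (interface name is hypothetical, matching the sketch above):

```
ifdown ens3f0np0 && ifup ens3f0np0   # against the port whose cable was pulled
```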
When this network goes down the multipath obviously fails, but we've also seen stranger behaviour: if we disconnect every port except R3P1 we can still access both mpatha and mpathb, yet if we leave all the ports connected and disconnect only R3P1 we lose both mpatha and mpathb. We believe this is a consequence of the aforementioned networking issue, but something odd is definitely happening here.
Thanks,
Taylor