Hi,
I will need 4 types of networks in my environment (I guess anybody who runs Proxmox at enterprise level would need them):
1) the one for corosync (ideally a dedicated one);
2) the one for web-GUI/SSH;
3) the one for the data of the VMs (a lot of VLANs);
4) the one for the VM disks hosted on the NFS share (let's call it the "NFS disk network").
We have 3x HP DL380 servers, each with 4 built-in copper interfaces and a card with two SFP+ slots (two of them will get optical SFPs, one just copper).
We have a Cisco stack with two units. I'm going for redundancy and trying to reduce the impact in case one of the switches dies.
Hence I try to use LACP bonds wherever possible, with each bond split across the two stack units. For the four networks above I'd want 4 LACP bonds, but that would require 8 interfaces, so I have to rethink how to assign the interfaces.
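For reference, this is roughly what I have in mind for one such bond feeding a VLAN-aware bridge for the VM traffic (just a sketch of /etc/network/interfaces; the NIC names eno1/eno2 and the VLAN range are placeholders):

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# LACP bond; the two members would go to different units of the Cisco stack
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# VLAN-aware bridge carrying the VM traffic (network type 3 above)
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094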
Considerations
The datacentre is in a location where we cannot guarantee rapid intervention in case of issues, so things must keep running even if one of the units of the stack fails, and all the services must stay up for the week or two it may take before someone can come and fix it.
In terms of criticality, I identified 1) and 4) as the most critical (you don't want to miss a heartbeat, nor do you want your VMs to have sluggish access to their disks). It's true that I can distribute the heartbeats over two L3 segments, so the redundancy is done at L3 rather than by coupling L2 links.
So maybe I can drop the dedicated LACP bond for corosync, assign those interfaces to the VM traffic instead, and spread the corosync heartbeats over the remaining three networks (the NFS disk network, the VM traffic and the web-GUI/SSH).
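Something like this is what I mean by spreading the heartbeats over more than one segment, using the redundant links that corosync 3 / kronosnet supports (just a sketch; the cluster name, node names and addresses are placeholders):

totem {
    version: 2
    # placeholder cluster name
    cluster_name: pvecluster
    # passive: link 1 is only used if link 0 fails
    link_mode: passive
    interface {
        linknumber: 0
    }
    interface {
        linknumber: 1
    }
}

nodelist {
    node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        # e.g. the NFS disk network
        ring0_addr: 10.10.0.1
        # e.g. the web-GUI/SSH network
        ring1_addr: 10.10.1.1
    }
    # same pattern for the other two nodes
}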
Would anyone like to share their approach on this topic?
Alex