Understand Bridge, Bond, Routed, etc

ejc317

Member
Oct 18, 2012
Hi,

On one of our KVM VMs, we can only reach IPs bound to vmbr0. On an OpenVZ container, however, we are strangely able to reach the network through ALL of our NICs - which is quite dangerous.

We have 4 NICs on each node: 2 for the public network, 2 for the private network. This OpenVZ container was only given a venet interface, which is only assigned a public IP - but it can ping out of our private network!

Any ideas?

We chose the routed option - should we choose bridged instead?
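For context: with venet, container traffic is routed through the host's network stack, so it follows the host routing table and can leave via any NIC the host can reach - bridges only constrain veth/tap interfaces that are actually plugged into them. A minimal bridged setup in /etc/network/interfaces might look like this (addresses and interface names here are placeholders, not your actual values):

```
auto vmbr0
iface vmbr0 inet static
    address  192.0.2.10
    netmask  255.255.255.0
    gateway  192.0.2.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

Giving the container a veth interface attached to vmbr0 (instead of venet) would confine its traffic to that bridge.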
 

ejc317

Let me go into a bit more detail

We have a SAN with 16 NIC ports going into 2 switches (8 each)

We have a proxmox test setup with 4 nodes with 4 ports each.

We have 2 of the ports going into the switches for public network connectivity (active/passive failover).
We have 2 of the ports going into the switches for private network connectivity.

Our questions are

1) Should we trunk the bonded ports on the switch side? We seem to be getting packet loss.
2) Can you bond, say, 2 ports when they're on different switches?
3) How do we take advantage of the 16 NIC SAN?

To us it seems like bonding the SAN ports and trunking them is the solution, but we're getting 10mb/s I/O!! That is horrendous given this is a full-SSD SAN.
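On question 2: you can bond two ports that land on different switches, but only with modes that need no switch cooperation, such as active-backup. 802.3ad/LACP trunking requires both ports on the same switch (or a stacked/MLAG pair), which could explain the packet loss if each switch was trunk-configured independently. A sketch for /etc/network/interfaces, assuming eth2/eth3 are the private-network ports (names and addresses are assumptions):

```
auto bond0
iface bond0 inet manual
    slaves eth2 eth3
    bond_mode active-backup
    bond_miimon 100

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

With active-backup only one port carries traffic at a time, so it provides failover across the two switches but not aggregated bandwidth.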
 

ejc317

1) If you bond 2 ports on different switches, yes, it works on the server side, but there's no way to trunk them across the switches ... will we get packet loss?

2) The SAN vendor suggests MPIO, but that does not seem feasible on the server side.
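For what it's worth, MPIO on the Linux side is usually handled by dm-multipath (the multipath-tools package) rather than by bonding, so it may be worth revisiting the vendor's suggestion. A minimal sketch of /etc/multipath.conf, assuming an iSCSI SAN - the WWID and alias below are placeholders, not values for your array:

```
defaults {
    user_friendly_names yes
    polling_interval    2
}

multipaths {
    multipath {
        wwid  3600a0b80000000000000000000000000
        alias san-lun0
    }
}
```

After installing multipath-tools and logging in to the target over each path, `multipath -ll` should show one device with multiple active paths; I/O then fails over (or is spread) across the paths with no switch-side trunking required, which could make better use of the SAN's 16 ports.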
 
