Network separation help

crembz

Member
May 8, 2023
Hi everyone,

What is the best way to separate different types of traffic on PVE hosts?

I have 3 interfaces: 1x 1G and 2x 10G.

I was thinking something along these lines, with any inter-VLAN routing happening at the gateway:

[Attached image: 1685853308559.png — network diagram]

I'm trying to understand how the gateway setup will work, since it can only be set on one bridge. I currently have it set on the VMNetwork VLAN. Does that mean all traffic from the PVE host will route through that VLAN?

Also, how do I restrict the PVE management interface to respond only on VLAN 5? At the moment it responds on any of the accessible interfaces.
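For reference, one common way to do this is a VLAN-aware bridge with the host's address defined only on a VLAN 5 sub-interface, so PVE answers only there. This is a sketch, not your exact config — the NIC name (eno1) and addresses are examples to adapt:

```
# /etc/network/interfaces (sketch — interface names and IPs are examples)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# The management address lives only on VLAN 5
auto vmbr0.5
iface vmbr0.5 inet static
        address 192.168.5.10/24
        gateway 192.168.5.1
```

Note that the host will still answer on any other bridge that carries a host IP, so restricting management to VLAN 5 also means not assigning host addresses on the other bridges (or firewalling them).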
 
Why use two vmbrs?
vmbr0 and vmbr1 interfaces can be useful when you virtualize pfSense/OPNsense, creating a LAN and a WAN.
If you just want to run VMs, one vmbr will do.

I assume Switches 1 & 2 are managed? You can configure VLANs on those switches.
For separating VLANs you must use port VLANs (PVID), or you can use pfSense/OPNsense for firewalling between the VLANs, in case of VLAN hopping.
And why use two switches? Is this for redundancy?

What is running as the gateway in your network diagram?
 
Hi there,

I'm running an end-to-end Ubiquiti network. Switch 1 is a 1G switch connecting each host's 1G interface; Switch 2 is a 10G switch. The gateway is a UDM SE.

If I add all interfaces to one vmbr, how do I control which physical interface is used? I.e. how do I ensure that storage traffic does not end up going down the 1G link, and vice versa?

Also, if I set VIDs at the port, doesn't that affect which VLANs will pass through to the vmbr? Say I set VID 5 at the port: the physical NIC will carry traffic tagged as VLAN 5. If I have VMs on this bridge on a different VLAN, I didn't think that traffic would pass through.
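One way to sidestep that question entirely is to keep the NICs on separate bridges, so each traffic type is steered simply by which bridge (and therefore which physical NIC) holds its IP/VLAN. A sketch, assuming eno1 is the 1G NIC and enp3s0 one of the 10G NICs (names and addresses are examples, not your actual config):

```
# /etc/network/interfaces (sketch — names and addresses are examples)
auto vmbr0                          # management on the 1G NIC
iface vmbr0 inet static
        address 192.168.5.10/24
        gateway 192.168.5.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1                          # storage/VM traffic on the 10G NIC
iface vmbr1 inet static
        address 172.16.50.5/24      # storage subnet, deliberately no gateway
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

With this layout, connecting to storage by its 172.16.50.x address can only use the 10G bridge, because that is the only interface with a route to that subnet.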
 
Isolated networks don't need a gateway; that limits them to communicating only with other devices in the same subnet.

However, without VLANs the other devices will 'see' the network packets, they just won't process them. With VLANs the network is also divided logically, and packets on VLAN 2 will not be 'seen' by devices on VLAN 3, and vice versa.

Network traffic will 'home' to whatever network port is configured for that subnet and VLAN.
 
OK, that makes sense, but I'm still a little confused about how you would route traffic from the PVE host down the appropriate interface. How do I force storage traffic (Ceph or NFS) to go over the 10G interface and management to go over the 1G interface?
 
While you can get into advanced configs with multiple routes and gateways, by default you can only have one gateway defined at the host level. That is not to say that your VMs have to use this gateway, but again, that's the usual state of affairs.

So, say your VMs are using 172.16.50.0/24 (on vmbr1) while your host is on 192.168.100.0/24 with a gateway of 192.168.100.1 (on vmbr0).

If a packet is sent from 172.16.50.10 (say, a VM) to 172.16.50.20 (say, a LAN client), then traffic will route via vmbr1, which is on a 10Gb connection. Likewise, traffic in the reverse direction will also route via the 10Gb connection.

However, this traffic will not be able to reach the internet, as there is no gateway.

So you set up some iptables rules on the host (or you create a VM to do the routing), and then traffic destined for anywhere other than the 172.16.50.0/24 network will go via vmbr0 on the 1Gb connection, while traffic within the same subnet will stay on the 10Gb connection.
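A minimal sketch of such rules on the host, assuming the example subnets above (vmbr0 holds the uplink with the gateway, 172.16.50.0/24 is the VM subnet on vmbr1) — these commands need root and would be persisted via your firewall tooling of choice:

```shell
# Allow the host to forward packets between bridges
# (persist via /etc/sysctl.conf or /etc/sysctl.d/)
sysctl -w net.ipv4.ip_forward=1

# NAT the VM subnet out through vmbr0 so replies come back to the host
iptables -t nat -A POSTROUTING -s 172.16.50.0/24 -o vmbr0 -j MASQUERADE

# Permit forwarded traffic from the VM subnet, and replies back in
iptables -A FORWARD -s 172.16.50.0/24 -j ACCEPT
iptables -A FORWARD -d 172.16.50.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
```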

The same rules apply for Ceph and NFS: if they have interfaces assigned to the 10Gb port, with IPs that can be reached via the 10Gb link, then that's what will be used. If you are going to allow different VLANs and subnets and want them interconnected, you will need to set up some form of inter-VLAN routing.
 
I see, so as long as I connect to the NFS share/Ceph cluster using the appropriate IP address/VLAN, traffic will be constrained to that NIC automatically. My L3 switch can handle inter-VLAN routing, but I'm not sure it'll be needed for storage traffic.
So I'm guessing that if the host then makes a request to an IP address outside of what is already configured on the NICs, it'll use the default gateway ... Is that about right?
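That is the usual behaviour: the kernel picks the most specific matching route and falls back to the default gateway. You can check which interface and source address the host would use for a given destination with `ip route get` (the addresses here follow the example subnets from earlier in the thread, not necessarily your real ones):

```shell
# A storage-subnet destination should resolve to the 10G bridge
ip route get 172.16.50.20

# An outside destination should resolve to the default gateway's bridge
ip route get 8.8.8.8
```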
 
