Best practice for setting up networking in Proxmox

draak

Member
Mar 1, 2012
Hello,

I've got an upcoming project where I need the following setup. I would appreciate your thoughts on this and what would be the "best practice".

Requirement:
I have 4 interfaces divided into 2 bonds.
There will be a mixture of KVM and CT guests. Each virtual server will need access to both the client and the storage network.
One of the bonds needs 802.3ad (LACP) and VLAN tagging, as it will be used for a separate storage network.

Issues:
1. Multiple gateways, one for the client network and one for storage
The GUI doesn't appear to allow multiple gateways (I don't need two default gateways), although there is no issue setting this up via the CLI.
Is that the correct approach, though? Perhaps these two gateways should not be set up on the host at all?

2. VLAN on a bond
How do you set this up? Does VLAN tagging only work on the member interfaces, i.e. bond1 containing eth2.10 and eth3.10, rather than bond1.10?
Neither the VLAN wiki page nor the Network Model page is very clear to me (sorry).

Thank you.
 
Let's assume the following:

bond0 will be the bond that does not require VLAN tagging
bond1 will be the one doing the VLAN tagging.

Then you would create bridges and VLANs like this:

vmbr0 bridges bond0 and any virtual network interfaces from VMs that need access to this network.


1.) The gateways are then set inside the VMs, not on the Proxmox host (see the guest-side sketch after the code block below).

2.) You can create VLAN interfaces on top of bonds, like bond1.10. You would then create a bridge, let's say vmbr10 (if you have, or might later have, multiple VLANs to connect to, make it easier on yourself by naming the bridges vmbr$VLANID), and connect your VMs' virtual NICs to this bridge as well.

There's one caveat. Previously you could not add bondX.Y interfaces to bridges via the GUI because it wouldn't let you. I don't know whether that's still the case, but if it is, you have to create them directly in /etc/network/interfaces. Doing so is pretty simple and would look like this:

Code:
auto bond1.900
iface bond1.900 inet manual
        vlan_raw_device bond1

auto vmbr900
iface vmbr900 inet manual
        bridge_ports bond1.900
        bridge_stp off
        bridge_fd 0
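
For point 1, here is a minimal guest-side sketch (addresses are made up for illustration): the VM gets one NIC per bridge, and only the client-facing NIC carries the default gateway, so general traffic leaves via the client network while storage traffic stays on its own subnet.

Code:
# inside a Debian-based guest: eth0 on the client bridge, eth1 on the storage bridge
auto eth0
iface eth0 inet static
        address 192.0.2.50
        netmask 255.255.255.0
        gateway 192.0.2.1        # the only default gateway, on the client network

auto eth1
iface eth1 inet static
        address 198.51.100.50
        netmask 255.255.255.0    # storage network, deliberately no gateway line

On the host side the guest NICs are attached to the bridges either in the GUI or with something like qm set <vmid> -net1 virtio,bridge=vmbr900.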
 
Hello mo_,

I've exported an NFS share on the NAS and set up NFS storage in the GUI. The nodes mount it fine.
I then restricted the NFS share to the storage network only, and the nodes could no longer mount it.

Normally I'd add a static route for the new interface, which in my case is vmbr<vlan number>.

Is there any established way of resolving this in Proxmox?

Cheers.
 
Maybe there is one, but personally I'd always go for the way that you know works, in your case a static route. The only thing to consider is that your route might not survive a reboot, so you're going to want to add it to /etc/network/interfaces. Where you have the vmbr500 stanza, you would have something like
Code:
iface vmbr500 inet static
 address X
 netmask Y
 gateway Z
 post-up route add -host <NAS-IP> dev vmbr500
...or however your static route ends up looking.
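
If you prefer iproute2 syntax, the same host route could be expressed like this (same placeholder as above):

Code:
 post-up ip route add <NAS-IP>/32 dev vmbr500
 post-down ip route del <NAS-IP>/32 dev vmbr500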

The last thing to consider is that Proxmox regenerates the /etc/network/interfaces file whenever you change something network-related in the GUI. The routine that parses the existing configuration out of that file is hand-written, and its developer can only anticipate so many configurations, so anything slightly exotic in the interfaces file may get dropped when you alter the config via the web interface. In practice, after any change in the GUI, double-check the interfaces.new file it creates and verify that your post-up rule is still there; if it isn't, add it there as well, since interfaces.new overwrites interfaces on the next reboot or networking restart.
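
One quick way to catch this, assuming a pending interfaces.new exists after a GUI change:

Code:
diff -u /etc/network/interfaces /etc/network/interfaces.new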
 
Hello,

I pulled this together from various Debian/Ubuntu sources, as the information from the official wiki didn't work for me for some reason.
Changes to /etc/iproute2/rt_tables weren't required, and post-up/down "ip route add table ..." didn't work.

This config is for two networks: the default client network (no bond yet) and the LACP storage network bond with a VLAN tag.

root@undo:/etc/network# cat interfaces
Code:
auto bond1.42
iface bond1.42 inet manual
    vlan-raw-device bond1

auto bond1
iface bond1 inet manual
    slaves eth2 eth3
    bond_miimon 100
    bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address  10.0.12.112
    netmask  255.255.255.128
    gateway  10.0.12.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr42
iface vmbr42 inet static
    address  10.0.16.29
    netmask  255.255.255.128
    bridge_ports bond1.42
    bridge_stp off
    bridge_fd 0
    post-up route add -net 10.0.16.0 netmask 255.255.255.128 gw 10.0.16.1 dev vmbr42
    post-down route del -net 10.0.16.0 netmask 255.255.255.128 gw 10.0.16.1 dev vmbr42
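
Note that ifupdown only applies these stanzas when the interfaces are (re)brought up, so after editing the file you either reboot the node or bring the new interfaces up by hand from the local console (not over a link you are reconfiguring), for example:

Code:
ifup bond1 bond1.42 vmbr42
# or restart networking entirely (risky over SSH):
service networking restart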


To verify that all is well, run the commands below.

Routing:
root@undo:/etc/network# netstat -rn
Code:
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.0.12.0       0.0.0.0         255.255.255.128 U         0 0          0 vmbr0
10.0.16.0       10.0.16.1       255.255.255.128 UG        0 0          0 vmbr42
10.0.16.0       0.0.0.0         255.255.255.128 U         0 0          0 vmbr42
0.0.0.0         10.0.12.1       0.0.0.0         UG        0 0          0 vmbr0

LACP:
root@undo:~# cat /proc/net/bonding/bond1
Code:
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 1
    Actor Key: 17
    Partner Key: 7
    Partner Mac Address: <MAC removed>

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: <MAC removed>
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: <MAC removed>
Aggregator ID: 2
Slave queue ID: 0

VLAN:
root@undo:/etc/network# cat /proc/net/vlan/config
Code:
VLAN Dev name     | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
bond1.42       | 42  | bond1
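
As a final check that the NAS is actually reachable over the storage network (assuming the NFS client utilities are installed on the node, with <NAS-IP> standing for the NAS address on the 10.0.16.0/25 subnet):

Code:
showmount -e <NAS-IP>    # should list the restricted export once the route via vmbr42 is in place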

I hope someone finds this useful; please let me know if there is anything above that could be improved.

Cheers.
 
