Proxmox with Docker (bonding) network help

flinte

So I got a new server for my Docker Plex environment, a beefy dual-Xeon Dell Precision, and I figured I would run a virtual lab environment on the same hardware, so I installed Proxmox. Coming from more of a VMware background, though, I am struggling. I really like Plex on Docker for its community and ease of use, so I want to keep using it on the new server. After installing Proxmox I installed Docker and Rancher and migrated my containers over; now I am trying to get the interfaces file working correctly, and it is fighting me every step of the way.
This machine has dual onboard 1GbE NICs, and I also installed a 4-port Intel 1GbE card I had lying around, so six ports total. Everything is plugged into a switch (a Netgear GS748TP) that supports 802.3ad.
My existing network is a simple 192.168.1.0/24. I am used to having only one subnet and I am fine with that; in fact everything expects my Plex Docker host at 192.168.1.15, though I can easily change that...
So here is my problem: I cannot write an interfaces file that gives networking to the guest VMs and LXC containers without the hypervisor and Docker containers losing access to my network. Here is an example of my interfaces file with that issue:
auto lo
iface lo inet loopback
auto enp2s0f0
iface enp2s0f0 inet manual
auto enp2s0f1
iface enp2s0f1 inet manual
auto enp2s0f2
iface enp2s0f2 inet manual
auto enp2s0f3
iface enp2s0f3 inet manual
auto eno1
iface eno1 inet manual
auto enp7s0
iface enp7s0 inet manual

auto bond0
iface bond0 inet manual
        address 192.168.1.15
        netmask 255.255.255.0
        gateway 192.168.1.1
        slaves enp2s0f0 enp2s0f1 enp2s0f2 enp2s0f3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto bond1
iface bond1 inet static
        address 192.168.1.6
        netmask 255.255.255.0
        slaves eno1 enp7s0
        bond_miimon 100
        bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp on
        bridge_fd 0
        up ip route add 192.168.1.0/24 dev vmbr0
#VM_Subnet
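
For reference, between edits I have been sanity-checking the result with the standard iproute2 tools and the bonding driver's proc interface (nothing Proxmox-specific here):

ip -br link                    # bond/bridge link state
ip -br addr                    # which device actually holds which IP
ip route show                  # which default gateway is actually in use
cat /proc/net/bonding/bond0    # whether 802.3ad negotiated with the switch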

Alternatively, I have discovered that if I change my interfaces file to the following, I get full internet and network access from the hypervisor and the Docker containers, but not from the Proxmox guest VMs and LXC containers! I cannot seem to get everything working together. Any advice would be appreciated.
auto lo
iface lo inet loopback
auto enp2s0f0
iface enp2s0f0 inet manual
auto enp2s0f1
iface enp2s0f1 inet manual
auto enp2s0f2
iface enp2s0f2 inet manual
auto enp2s0f3
iface enp2s0f3 inet manual
auto eno1
iface eno1 inet manual
auto enp7s0
iface enp7s0 inet manual

auto bond0
iface bond0 inet manual
        slaves enp2s0f0 enp2s0f1 enp2s0f2 enp2s0f3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto bond1
iface bond1 inet static
        address 192.168.1.6
        netmask 255.255.255.0
        slaves eno1 enp7s0
        bond_miimon 100
        bond_mode balance-rr
#Management

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.15
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp on
        bridge_fd 0
#Docker

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp on
        bridge_fd 0
        up ip route add 192.168.1.0/24 dev vmbr1
#VM_Subnet
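
While writing this up I noticed two things in the second config that I am not even sure are legal: bond0 is listed as a bridge port on both vmbr0 and vmbr1 (I thought a device could only be enslaved to one bridge at a time), and both bridges define a gateway (I thought a host could only have one default gateway). If that is the actual problem, I am guessing the fix looks roughly like the sketch below: one bridge on bond0 carrying 192.168.1.0/24, and a second, port-less bridge for the 10.10.10.0/24 lab subnet. Completely untested, so please correct me:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.15
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp on
        bridge_fd 0
#Docker and management, the host's only default gateway

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp on
        bridge_fd 0
#VM_Subnet, internal-only, no gateway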
 
