Local network of containers: how to

Hi. After a PVE 4 upgrade we are having some issues understanding the best setup for local traffic among containers.

Some containers only have "local network" IPs/interfaces (10.0.0.XXX), while others have several public ones plus a local one. The host also has an interface with a local network IP.

Up until now, traffic between container X and container Y via the local network IPs (10.0.0.XXX) worked fine, but it no longer does. I am uncertain whether this is down to our config on the containers or to routes on the host.

Currently our container definitions for the local network are of the form 10.0.0.XXX/24, while perhaps they should be 10.0.0.XXX/32 so that all traffic is routed via the gateway, which I understand is the host interface.

Is that correct, or am I missing some routes on the host so that traffic from container20 (10.0.0.20) can reach container30 (10.0.0.30) directly?
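
For reference, the /32 variant I have in mind would look something like this inside a container (a sketch only; 10.0.0.20 is just an example address, and the host would keep 10.0.0.1/24 on its bridge):

Code:
auto eth0
iface eth0 inet static
        address 10.0.0.20
        netmask 255.255.255.255
        # with a /32 the gateway is not on-link, so add a host route to it first
        post-up ip route add 10.0.0.1 dev eth0
        post-up ip route add default via 10.0.0.1 dev eth0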

My host local net interface definition is as follows:

Code:
auto vmbr10
iface vmbr10 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        broadcast 10.0.0.255
        network 10.0.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        #post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
        post-up echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp
        post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE



Thanks.
 
Edit: sorry, I posted some wrong information and removed it.

The network definition 10.0.0.0/24 should be OK. As long as both containers are on the same host and connected to the same bridge, they should be able to use the bridge like a switch.
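
If you want to double-check that both containers really hang off the same bridge, something like this on the host should list their veth ports under vmbr10 (assuming brctl from the bridge-utils package is installed):

Code:
# list the ports attached to the bridge
brctl show vmbr10
# iproute2 alternative
bridge link show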

I think you don't need the "broadcast" and "network" lines, either.

Don't take my information as 100% correct though; I've only just started working with bridging and iptables.
 
All containers are on the same bridge (?), so they can reach each other without any network definition on the host. So what exactly is the problem?
 
Initially the issue was not being able to connect from one container to the other, but I've now realized I have a "higher" problem with network traffic in general for those containers: they don't seem to be able to route traffic to the internet if they only have a local network.

Here is the setup:

HOST:
Code:
# The loopback network interface
auto lo
iface lo inet loopback

# for Routing
auto vmbr1
iface vmbr1 inet manual
        post-up /etc/pve/kvm-networking.sh
        bridge_ports dummy0
        bridge_stp off
        bridge_fd 0

auto vmbr10
iface vmbr10 inet static
        address  10.0.0.1
        netmask  255.255.255.0
        broadcast 10.0.0.255
        network 10.0.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        #post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
        post-up echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp
        post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE



# vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
        address 164.xxx.xxx.xxx
        netmask 255.255.255.0
        network 164.xxx.xxx.0
        broadcast 164.xxx.xxx.255
        gateway 164.xxx.xxx.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
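
(Side note: instead of the post-up echo for forwarding, I understand it can also be enabled persistently via sysctl; a sketch:)

Code:
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
net.ipv4.ip_forward = 1
# apply without rebooting
sysctl -p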

# ip route (on host)
Code:
default via 164.xxx.xxx.254 dev vmbr0
10.0.0.0/24 dev vmbr10  proto kernel  scope link  src 10.0.0.1
164.xxx.xxx.0/24 dev vmbr0  proto kernel  scope link  src 164.xxx.xxx.xxx

So the routing decision, as I understand it, is correct:
Code:
# ip route get 5.135.xxx.17 from 10.0.0.1
5.135.xxx.17 from 10.0.0.1 via 164.xxx.xxx.254 dev vmbr0


ON THE CONTAINER WITH LOCAL ADDRESS ONLY:

Code:
#cat /etc/network/interfaces
auto eth0
iface eth0 inet static
        address 10.0.0.109
        netmask 255.255.255.0
        gateway 10.0.0.1

# ip route
Code:
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.109
10.0.0.0/23 via 10.0.0.1 dev eth0  src 10.0.0.109

No external traffic at all.
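
(For what it's worth, this is roughly how I have been checking whether forwarding and the NAT rule actually see the packets; the POSTROUTING counters should increase while pinging from the container:)

Code:
# on the host: is forwarding enabled?
cat /proc/sys/net/ipv4/ip_forward
# are packets hitting the masquerade rule?
iptables -t nat -vnL POSTROUTING
# does the traffic leave on the public bridge at all?
tcpdump -ni vmbr0 icmp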
Also, and assuming it has nothing to do with it: I was hoping to point resolv.conf to the host's local IP (10.0.0.1 in this case), but any change I make to resolv.conf reverts to 127.0.0.1.
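
(If I understand it correctly, PVE regenerates the container's resolv.conf from the container config at start-up, so presumably the nameserver has to be set there rather than in the file itself; e.g., with 109 as the container ID:)

Code:
pct set 109 -nameserver 10.0.0.1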

Any help would be appreciated.
 
We seem to have tracked down the issue, but I am not really sure of the explanation or of the correct/recommended configuration.
On previous PVE versions we had this on the vmbr10 (private LAN) definition:

Code:
post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE

As we use CSF as a firewall on the host, we also have the same rule in a csfpost.sh script that gets executed whenever iptables changes.

As we presumed the issue was related to external traffic reaching the containers, we looked at alternatives from the OpenVZ doc (https://openvz.org/Using_NAT_for_container_with_private_IPs) and tried this instead:

Code:
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j SNAT --to 5.xxx.xxx.xx   # ---> the host's external public IP
which works.

According to that same document both commands should do the same thing, so I wonder if this has to do with a missing iptables module, a change of syntax on Debian 8, etc.
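
One thing I plan to check (just a guess on my part) is whether the MASQUERADE target module is actually available/loaded, and whether CSF leaves the rule in place after it reloads the tables:

Code:
# is the masquerade module there?
lsmod | grep -i masq
modinfo ipt_MASQUERADE
# is the rule still present after CSF runs, and is it matching anything?
iptables -t nat -vnL POSTROUTING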

For the time being we are switching to the first command in /etc/network/interfaces on the host until we can understand why, but I am unsure whether this is related to something else or is in fact correct.
 
