Internal network between containers

Henrique

Feb 8, 2016

Hey there.

I'm having some trouble implementing the following scenario: I have a dedicated server from OVH, and a few failover IPs that are working fine with my LXC containers.

The problem is that I also want an internal network (192.168.1.0/24, for example) so I can set up an internal NFS share.

I've searched a lot and couldn't find a way to create that internal network. I feel like I'm missing something. Do I need a dedicated container acting as a router, or is it possible to achieve this with bridges alone?

Can someone provide step-by-step instructions on how to set the damn thing up?

I really appreciate all your help!

Henrique
 
I did it, and it's not working...

I have this bridge, set up this way:

https://cloudup.com/cTxjh3dGBhP
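In case the screenshot doesn't load: basically it's an internal-only bridge with no physical port attached. In /etc/network/interfaces on the host it would look roughly like this (just a sketch, since the exact values are in the screenshot; I'm using vmbr2 with 192.168.1.254/24):

Code:
# internal-only bridge, no physical uplink (sketch)
auto vmbr2
iface vmbr2 inet static
        address 192.168.1.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0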

On the containers I have two interfaces, eth0 and eth1, where eth1 is the internal interface, and it's configured as follows:

https://cloudup.com/cUph80duvpz
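Again, in case the screenshot doesn't load: on the Proxmox side this is just a second network device on the container, attached to the internal bridge, roughly a line like this in the container's config (a sketch; VMID 102 and setting the IP here rather than inside the guest are assumptions):

Code:
# /etc/pve/lxc/102.conf -- second NIC on the internal bridge (sketch)
net1: name=eth1,bridge=vmbr2,ip=192.168.1.2/24,type=veth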

I have the same setup on another container (with a different IP); however, I can't ping from one container to the other.

What am I doing wrong? :(
 
That's strange, it should work.

Can you ping the bridge IP from the containers?
What is the output of "brctl show" while the containers are running?
 
Here's the output:

Code:
root@blackpearl:~# brctl show
bridge name    bridge id        STP enabled    interfaces
vmbr0        8000.74d02b26be58    no        eth0
                            veth102i0
                            veth103i0
vmbr1        8000.000000000000    no
vmbr192        8000.000000000000    no
vmbr2        8000.fe2f98178fbf    no        veth102i1
                            veth103i1
vmbr20        8000.000000000000    no

Just ignore the other bridges; they're only there for testing. I'm using vmbr0 and vmbr2. From the container I can ping my own IP (192.168.1.2, for example), but I can't ping 192.168.1.1 (the gateway, I guess).
 
I don't know what this 192.168.1.1 IP is (it's your network, I guess ;)). But you have set up 192.168.1.254 on vmbr2, so you should normally be able to ping it from the container.
 
I can't ping 192.168.1.254 either :(

Code:
[root@maestro ~]# ping 192.168.1.254
PING 192.168.1.254 (192.168.1.254) 56(84) bytes of data.
From 192.168.1.2 icmp_seq=1 Destination Host Unreachable

What would you do, step by step, to get the internal network working? I feel like I'm missing some basic step somewhere, but I also feel like I've tried everything.

Update: I'm now able to ping the gateway (192.168.1.254) from both containers, but I can't ping one container from the other, and vice-versa. What am I missing? :(
 
Hmmm.

I changed it to this:

[screenshot: upload_2016-2-11_17-35-52.png]

And the other container to 192.168.1.3/24.

The problem now is different. In one of the containers I don't get any connectivity at all (not even on the eth0 public IP that was working before), and on the other, eth1 seems to have lost its IPv4 configuration:

Code:
23: eth1@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 36:31:31:64:34:62 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::3431:31ff:fe64:3462/64 scope link
       valid_lft forever preferred_lft forever

Now I can't ping either the gateway or my own IP.
 
Yes, I restarted the containers, and also the host server.

Here is the config file (I'm using CentOS):

Code:
[root@vpn03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.2
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
IPV6INIT=yes
IPV6_AUTOCONF=no
DHCPV6C=yes

I can't access the other machine since it doesn't have internet connectivity and I don't have the root password right now (I'm using a public key to authenticate).
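Its ifcfg-eth1 should presumably just mirror this one with the other address (a sketch, assuming the 192.168.1.3 mentioned above):

Code:
# expected /etc/sysconfig/network-scripts/ifcfg-eth1 on the other machine (assumed)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.3
NETMASK=255.255.255.0
GATEWAY=192.168.1.254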
 
Just to be sure: when you talk about "the other machine", is it the other container, or do you have the containers on different hosts?
 
Yes, it's the other container, sorry for using the wrong expression. I have everything on the same host :/
 
ok ;)

I think we should check whether ARP is working correctly between the containers.
Try launching a ping between the containers in both directions,

then do an "arp -a" and check whether you see an ARP entry with the MAC address of the other container.

You can also use "tcpdump -v arp" to check whether you see ARP requests/replies.
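For example, something like this (just a sketch; I'm assuming eth1 and the 192.168.1.2 / 192.168.1.3 addresses from your posts):

Code:
# on the first container (192.168.1.2), ping the other one
ping -c 3 192.168.1.3

# in a second shell on the same container, watch for ARP traffic on eth1
tcpdump -v -i eth1 arp

# afterwards, check the ARP cache for the other container's MAC address
arp -a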
 
I don't see any ARP traffic from any of my containers :/ I've tried recreating the containers and the bridge, but it isn't working :(
 
Hi,
stupid question... is IP forwarding active? Is masquerading (for outgoing traffic) active? A quick way to check both is sketched at the end of this post.

And can you try changing your client network cards to static? I see in your screenshot that DHCP is active (this screenshot: https://forum.proxmox.com/threads/internal-network-between-containers.25962/#post-130315 ).

I can't say whether this is the problem, but I read somewhere that you have to use net0 on all network cards... you have net1 active, can you change it to net0?

This part is for VLANs, if I understand it correctly...
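Something like this on the host, just as a quick check (a sketch):

Code:
# is IPv4 forwarding enabled on the host?
sysctl net.ipv4.ip_forward

# is there a masquerade rule for outgoing traffic?
iptables -t nat -L POSTROUTING -n -v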
 
DHCP is active only for IPv6, which I don't care about at the moment.

I don't need IP forwarding or masquerading, as each container has another interface with a public IP that is used for internet access. I just need the second interface on the internal network for internal communication: I need to set up an internal NFS share across a few containers.
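Once the containers can reach each other on 192.168.1.0/24, the NFS part itself should be simple, roughly like this (a sketch; /srv/share and /mnt/share are just placeholder paths):

Code:
# on the NFS server container (e.g. 192.168.1.2), in /etc/exports:
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)

# apply the export, then mount it from another container:
exportfs -ra
mount -t nfs 192.168.1.2:/srv/share /mnt/share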
 
