Network communication between containers on different hardware nodes

jmartin

Member
Mar 17, 2009
I have two servers running Proxmox, set up as a cluster. eth0 is configured as the bridge device (default setup); eth1 is used as a direct link between the cluster nodes. /etc/network/interfaces looks like this:
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  <ip-of-node-1>
        netmask  255.255.255.255
        pointopoint <ip-of-node-2>

auto vmbr0
iface vmbr0 inet static
        address  <ip-of-node-1>
        netmask  255.255.255.0
        gateway  <default-gateway>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
The file is identical on both nodes, with the IP addresses reversed.

I'm unable to ping containers running on node-1 from node-2 and vice versa. This can be fixed (on node-2) with:
Code:
  ip route add <container-on-node-1> via <node-1>
However, even with this fix, I cannot ping containers running on node-1 from containers running on node-2 and vice versa.
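For completeness, the analogous route on node-1 would presumably be:
Code:
  ip route add <container-on-node-2> via <node-2>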

Do I need to set up another bridge device on eth1 to fix this? Or is there something else I need to fix?
 
Hi,
by default, Proxmox uses vmbr0 for the cluster communication. Why not use something like this:
vmbr0 - bridged to eth1, directly connected to the other cluster node (with an IP like 192.168.111.1, and 192.168.111.2 on the second node).
vmbr1 - bridged to eth0, with a "normal" IP - the guests must use vmbr1, and then all of them should be able to ping each other (their traffic still goes over eth0; eth1 is only used for cluster syncing and so on).
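For example, /etc/network/interfaces could then look roughly like this (the 192.168.111.x addresses are only examples, untested):
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        # use 192.168.111.2 on the second node
        address  192.168.111.1
        netmask  255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address  <ip-of-node-1>
        netmask  255.255.255.0
        gateway  <default-gateway>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0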

Udo
 
Hi,
by default, Proxmox uses vmbr0 for the cluster communication. Why not use something like this:
vmbr0 - bridged to eth1, directly connected to the other cluster node (with an IP like 192.168.111.1, and 192.168.111.2 on the second node).
vmbr1 - bridged to eth0, with a "normal" IP - the guests must use vmbr1, and then all of them should be able to ping each other (their traffic still goes over eth0; eth1 is only used for cluster syncing and so on).

I understand the setup you propose would make the cluster communication more reliable (going through the direct link on eth1 rather than through the wider network on eth0).

But would that actually solve my problem? The only difference I see with respect to the communication between the containers is that they would go through vmbr1 instead of vmbr0, while vmbr1 would be connected to eth0 just as vmbr0 is now.
 
I decided not to open a new thread but to write in this one, since the topic seems related.

So, I have two nodes with two NICs each. On both nodes, eth0 goes to the internet and eth1 goes to a switch; vmbr0 is bridged to eth0 and vmbr1 is bridged to eth1.

If I put two containers on the same node, give each CT an interface eth0 attached to bridge vmbr1, and set IPs in the same range, I can ping CT to CT. If I move one CT to the second node (so, one CT per node), the ping does not go through.

Any ideas?
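I have not yet traced where the packets get lost; I suppose something like this on both nodes would show it (interface names as above, untested):
Code:
# check that eth1 is really enslaved to vmbr1
brctl show vmbr1

# while pinging from one CT to the other, watch for the ICMP packets on eth1
tcpdump -ni eth1 icmp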
 
I decided not to open a new thread but to write in this one, since the topic seems related.

So, I have two nodes with two NICs each. On both nodes, eth0 goes to the internet and eth1 goes to a switch; vmbr0 is bridged to eth0 and vmbr1 is bridged to eth1.

If I put two containers on the same node, give each CT an interface eth0 attached to bridge vmbr1, and set IPs in the same range, I can ping CT to CT. If I move one CT to the second node (so, one CT per node), the ping does not go through.

Any ideas?

I have the same problem on my configuration :confused:
 
