Internal network between containers

Hmm, curious. I have all my VPSes configured with public IPs, and they all use net0.
All configs are the same as yours, and the VPSes in the same subnet can ping each other.

On the host I have eth0 + vmbr0 (subnet 192.168.0.0/16) + vmbr1 (subnet 193.168.0.0/16), with IP forwarding active,
2 containers bridged to vmbr1 and 2 bridged to vmbr0.

All containers using vmbr0 can ping the other containers on that bridge, but not the containers on vmbr1.

All network cards in the containers are configured without a VLAN, with net0 carrying both the private subnet and the public IP.
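For reference, the host side I described would look roughly like this in /etc/network/interfaces (a reconstruction for illustration only; the bridge addresses are placeholders):

Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1
        netmask 255.255.0.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 193.168.0.1
        netmask 255.255.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # IP forwarding, so the host can route between the bridges
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward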
 
Since I'm using OVH and they use IP failover, I have the eth0 + vmbr0 setup this way:
[attached screenshot: eth0 + vmbr0 configuration]
 
Okay, I don't understand why you use this setup; you can use private and public IPs with net0. It works great on all my hosts.
The only things I have to do are set up a route for the public IP to the right bridge, configure one network card with the public IP and the IP of the used bridge as its gateway, and configure one network card with the private IP, again with the bridge IP as its gateway. Have you tried aliases like eth0:0 for the private IPs, as sketched below?
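For example, an eth0:0 alias for the private IP inside the container could look like this (the address is a placeholder):

Code:
auto eth0:0
iface eth0:0 inet static
        address 192.168.0.10
        netmask 255.255.0.0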
 
So, for everyone:

Two errors I fixed: he used the same IP for both VPSes and a wrong CIDR.

Proxmox 4 doesn't check whether an IP is already configured in another container (Proxmox 3 had this check, as I remember).
 
As nixmomo said, he helped me get the internal network working. But now I'm having a weird problem with the default gateway.

I have two NICs: eth0 with my public IP and public gateway, and eth1 with my private IP and private gateway. The problem is that after a reboot I lost all external connectivity. I suspected it could be routing:

Code:
[root@proxy2 ~]# ip route list
default via 192.168.1.254 dev eth0
169.254.0.0/16 dev eth1  scope link  metric 1018
169.254.0.0/16 dev eth0  scope link  metric 1020
176.xxx.xxx.254 dev eth1  scope link
192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.5

Yup, the default gateway is the private one instead of the public one (176.xxx.xxx.254). If I run
Code:
ip route replace default via 176.xxx.xxx.254
then it starts working again.

I know that I can change the network config file in each container to set the correct gateway, but how can I make Proxmox do this automatically?
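For example, I could probably set it from the host with pct, putting the gw= option only on the NIC that should own the default route (the VMID and container IPs below are just placeholders), but is there a cleaner way?

Code:
# default gateway only on the public NIC
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=176.xxx.xxx.10/24,gw=176.xxx.xxx.254
# private NIC without gw=, so it does not install a competing default route
pct set 101 -net1 name=eth1,bridge=vmbr1,ip=192.168.1.5/24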
 
In case anyone needs it, I figured out this solution:

Add this to /etc/network/interfaces on the Proxmox host:

Code:
auto vmbr2
iface vmbr2 inet static
        address 172.16.25.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # let the host answer ARP on behalf of addresses it can route
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr2/proxy_arp
#LAN

Then, in the LXC containers, add an eth1 network interface bridged to vmbr2:

Code:
auto eth1
iface eth1 inet static
        address 172.16.25.10
        netmask 255.255.255.0
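The second container gets the same stanza with its own address (172.16.25.11 here is just an example):

Code:
auto eth1
iface eth1 inet static
        address 172.16.25.11
        netmask 255.255.255.0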

With that, I can ping between the LXC containers.
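Note that proxy_arp only makes the host answer ARP on the bridge; for the containers to reach networks beyond the host, IP forwarding has to be enabled as well (the standard sysctl, nothing specific to this setup):

Code:
sysctl -w net.ipv4.ip_forward=1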
 
I did not read the entire post, but I would like to share my setup: all containers, KVMs, and PVE hosts are on a dedicated VLAN.


To achieve that, I had to remove the Linux bridge and install Open vSwitch on each PVE node, then reconfigure PVE so that vmbr0 is an Open vSwitch bridge that uses a physical NIC as its switch port.
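A minimal sketch of that part of /etc/network/interfaces (assuming eth0 is the physical NIC and the openvswitch-switch package is installed; your names may differ):

Code:
allow-vmbr0 eth0
iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eth0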

Next, on each virtual machine, create a NIC and set its VLAN tag to, e.g., 11.

It is just a slick setup, because essentially you're utilizing 802.1Q. Traffic on the dedicated VLAN is completely isolated from your home network. Also make sure you configure your router so that traffic from the LAN can be routed to VLAN 11.

good luck
 
