[SOLVED] Isolate Containers from One Another

nbeam

New Member
Apr 6, 2016
So here is the setup I have.

A single, public Proxmox server running version 4.1-22.

It has a single NIC with a single public IP. And that is all I can get.

So naturally I am using NAT behind that public IP for my containers.

So on the physical host: eth0 --> vmbr0 with the public IP.

In addition, I have set up new bridges on the host, one for each container network. For example:

vmbr150 - ip 10.150.150.254, network 10.150.150.0/24
vmbr50 - ip 10.50.50.254, network 10.50.50.0/24
etc. etc.
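
For reference, here is roughly what one of those bridge stanzas looks like in /etc/network/interfaces on the host (a sketch, not my exact file; bridge_ports is none because these bridges have no physical uplink):

# host-only bridge for the 150 container network
auto vmbr150
iface vmbr150 inet static
    address 10.150.150.254
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0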

Then I have containers with their eth0 tied to vmbr150, an IP address of 10.150.150.XX, and a gateway of 10.150.150.254. The host has NAT rules allowing the containers to send traffic out of the network.

Containers in the .50 network are set up in similar fashion.
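
For example, the relevant line from a container's config file (something like /etc/pve/lxc/101.conf; the VMID and the .10 address here are just placeholders) would look like:

net0: name=eth0,bridge=vmbr150,ip=10.150.150.10/24,gw=10.150.150.254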

From my host's iptables NAT table (output of iptables -t nat -L POSTROUTING):

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.50.50.0/24 anywhere
MASQUERADE all -- 10.150.150.0/24 anywhere
MASQUERADE all -- 10.200.200.0/24 anywhere

This works pretty much perfectly all around. Except...

I was surprised when I was working in one of my containers in the 150 network that I could ping, SSH to, etc. a container in the .50 network.

So apparently the host is routing traffic from one network segment to another. I have tried fiddling with firewalls, but that either made no difference (when making changes at the node firewall level) or broke things (when I enabled the firewall on a container's interface, even without any rules applied, it seemed to break all networking). I started to fuss with VLANs, but that just broke things too. I am open to trying either again, though, if someone has a solution.

The goal is to more fully isolate the containers from one another so there isn't any cross-talk across internal network segments on the node.

Any ideas on how I could achieve this?

UPDATE:
BTW - I did a good deal of searching Google first without much luck :( I am usually pretty good about researching and figuring things out on my own, but this one had me stumped...

Also, here is an example of the NAT rule I have tied to the vmbr50 interface on the physical host:
# enable kernel IP forwarding when the bridge comes up
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade traffic from the container network out of the public bridge
post-up iptables -t nat -A POSTROUTING -s '10.50.50.0/24' -o vmbr0 -j MASQUERADE
# remove the rule again when the bridge goes down
post-down iptables -t nat -D POSTROUTING -s '10.50.50.0/24' -o vmbr0 -j MASQUERADE
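
Side note: instead of the post-up echo, kernel forwarding can also be enabled persistently via sysctl, if you'd rather keep it out of the interfaces file:

# add to /etc/sysctl.conf (or a file under /etc/sysctl.d/), then run: sysctl -p
net.ipv4.ip_forward = 1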
 
Face Palm....

Had I only read the wiki a little more closely...

https://pve.proxmox.com/wiki/Network_Model#Routed_Configuration

and this:

http://lartc.org/howto/lartc.bridging.proxy-arp.html

I just switched over to using a "routed model" and that accomplished EXACTLY what I wanted...

Sorry for the post.

PS - For anyone else who comes across this: when using a "routed config", the gateway for your containers is the same as the gateway for your public IP. This threw me a bit at first, as I am not used to seeing a gateway address outside the network range of the machine, but that is exactly what the routed config is all about.
 

Actually, that was utterly incorrect. This issue still isn't solved for me. I thought it was, but that was because I had apparently pushed the gateway update to the wrong VM (it wasn't the one I was testing from).

After trying both gateway IP addresses (the public gateway and the 10.150.150.254 address assigned to vmbr150 on the host), I still have no network connectivity to the outside world when using a routed config. Any help appreciated.
 
Okay, final update I think... and I believe this is solved?

Apparently the trick was to use iptables on the physical host to deny communication between interfaces. So... I ended up doing the following:

1. Going back to my standard NAT'd setup as explained in the OP above. So internet access is working for all containers again. However, each network can still talk to every other network, which is what I was trying to prevent.

2. Then, on the host, manually configuring iptables to DROP all traffic between specific interfaces, setting up rules in both directions. Example:

First, set up default rules so that connections which are already established (or related to established ones) keep working:
iptables -A FORWARD -m state --state ESTABLISHED -j ACCEPT
iptables -A FORWARD -m state --state RELATED -j ACCEPT
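
Those two can also be collapsed into a single rule, if you prefer:

iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT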

Next, set up rules dropping all traffic between the interfaces:
iptables -A FORWARD -i vmbr150 -o vmbr50 -j DROP
iptables -A FORWARD -i vmbr50 -o vmbr150 -j DROP
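
To confirm the rules are loaded and actually matching traffic, you can list the FORWARD chain with packet counters:

iptables -L FORWARD -v -n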

Bingo - that did it. I was trying to accomplish this using the GUI firewall in the Proxmox control panel, but it never seemed to work. I have a feeling this is because the NICs on my containers didn't have the firewall enabled (and I could NOT enable the firewall without my other NAT stuff just plain not working... so that wasn't an option).

This goes back to my premise that the Proxmox firewall is still a bit quirky. All in all, I will end up adding these as post-up actions in my /etc/network/interfaces file, and I should be good to go through reboots.

I hope this is a help to someone else :)

All thanks go to this post here: http://ubuntuforums.org/showthread.php?t=1719262
 
One further update to this whole thing... If you want to drop ALL traffic BETWEEN containers (where each is using its own interface), it looks something like this (run on the Proxmox host):

iptables -A FORWARD -i vmbr150 ! -o vmbr0 -j DROP

This works (as does the rule in my preceding post) because the Proxmox host machine is essentially a router between the various virtual interfaces you have set up.

So if a container on vmbr100 wants to talk to one on vmbr200, the Proxmox host must FORWARD the traffic from vmbr100 to vmbr200.

The above rule includes an exclamation mark (a "bang"), which iptables uses to negate or "invert" (that is the language the iptables manual uses) a match.

So to explain the above command:

iptables: at the end of the FORWARD rule chain, append a rule that says: for traffic coming FROM vmbr150 that IS NOT headed for vmbr0, block/drop it.

This means that if you are using NAT through a shared public IP for your containers' internet connectivity, THAT traffic will still be allowed out, because it is headed for vmbr0.

However, if your container is trying to talk to another container, it will be blocked.
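
If you have several of these container bridges, the same rule can be applied to each of them in one go; here is a sketch (adjust the bridge list to match your own setup):

for br in vmbr50 vmbr150 vmbr200; do
    iptables -A FORWARD -i "$br" ! -o vmbr0 -j DROP
done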

So, at the end of the day, here is an example from the end of an interface entry in my /etc/network/interfaces file on my host....

# NAT the container network out of the public bridge
post-up iptables -t nat -A POSTROUTING -s '10.150.150.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.150.150.0/24' -o vmbr0 -j MASQUERADE
# drop anything from vmbr150 that is not headed out the public bridge
post-up iptables -A FORWARD -i vmbr150 ! -o vmbr0 -j DROP
post-down iptables -D FORWARD -i vmbr150 ! -o vmbr0 -j DROP
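
A quick sanity check from inside a container in the 150 network (the .50 address is a placeholder for one of your own containers):

ping -c 3 8.8.8.8           # should succeed - NAT out via vmbr0 is still allowed
ping -c 3 -W 2 10.50.50.10  # should time out - forwarding between bridges is dropped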

Hopefully this helps someone else struggling with proper IPtables configuration.
 