Add several public IP addresses to a working MASQUERADING setup?

wowo

Hey everyone,

this setup runs in a datacenter on a public IP 37.x.x.86/26.
The server has one physical interface, eth0.
The current setup works nicely with a masqueraded private subnet 10.10.10.0/24 and the single public IP.

I have now bought two additional IP addresses because I need public IPs for one of our services.
I could also order MAC addresses for these additional IPs if needed.

My question is: what is the simplest way to add the new public IPs (37.x.x.85 and 37.x.x.84) to the existing setup?
As this is a production machine with many running containers, I would prefer not to change too much of the working setup.

What would be your suggestions?

Thank you very much!

wowo

Code:
# /etc/network/interfaces

auto lo
iface lo inet loopback

# Physical interface
auto eth0
iface eth0 inet static
        address  37.x.x.86/26
        gateway  37.x.x.1

        # Flush old iptables rules
        post-up         iptables -F
        post-up         iptables -t raw -F
        post-up         iptables -t nat -F

        # Set defaults
        post-up iptables -P FORWARD ACCEPT
        post-up iptables -P INPUT DROP
        post-up iptables -P OUTPUT ACCEPT

        # Allow loopback
        post-up iptables -A INPUT -i lo -j ACCEPT
        post-up iptables -A OUTPUT -o lo -j ACCEPT

        # Allow conntrack (established/related connections)
        post-up iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        post-up iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT


        # Allow access to services on host

        post-up         iptables -A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT        
        # more ports snipped for better readability


        # Port mappings for containers in subnet 10.10.10.0/24
        post-up       iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 12345 -j DNAT --to-destination 10.10.10.3:12345
        # more port mappings snipped for better readability



#Bridge for the private 10.10.10.0/24 subnet

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.100/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0


        #Masquerading for the private subnet

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
 
How do you want to utilize the new IPs? Do you want specific containers to use specific IPs?
 

The new public IPs should be available to a single Debian container with two virtual NICs.
But I am also interested in how to do it with a single IP per container, for the future.

I played around for some time with the "routed setup" from the official documentation, but sadly wasn't able to make it work.

Thank you very much!
 
Considering that you want to change as little as possible, since this is a production machine, you can configure the IPs on the host as aliases for the eth0 interface and do 1:1 NAT to the respective containers.
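
A minimal sketch of that alias + 1:1 NAT variant, in the same post-up style as your current config (the alias IPs and the container address 10.10.10.4 are placeholders):

Code:
# Add the additional public IPs as aliases on eth0 (/32 so no extra subnet routes are created)
post-up ip addr add 37.x.x.85/32 dev eth0
post-up ip addr add 37.x.x.84/32 dev eth0

# 1:1 NAT for one container: everything arriving for 37.x.x.85 goes to 10.10.10.4,
# and the container's outgoing traffic is source-NATed back to that IP.
# The SNAT rule is inserted (-I) so it matches before the broad MASQUERADE rule.
post-up iptables -t nat -A PREROUTING  -i eth0 -d 37.x.x.85 -j DNAT --to-destination 10.10.10.4
post-up iptables -t nat -I POSTROUTING -s 10.10.10.4 -o eth0 -j SNAT --to-source 37.x.x.85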

The cleaner (and preferred) solution would be to have one bridge for the private subnet (as you do already) and one bridge for the public subnet (this would require creating a new bridge that enslaves eth0), and then give each container a network card on the public bridge with its IP configured. But that would require bigger changes to your current network configuration.
 
Thank you for the insight! Because more additional IPs might follow in the future, I guess I will try to do it "the right way".
It's difficult to compare work in the future with work today, but unfortunately I've often had bad experiences with quick and dirty stuff later on ;)

Sadly I am no networking guru so some questions still remain.

- Would enslaving eth0 mean that I have to move the existing main IP of the machine away from eth0 to another bridge, and change masquerading, firewalling, port mappings, etc. for the existing containers accordingly?
I frankly don't understand what happens to the main IP when the interface gets enslaved.

- We are using a datacenter that doesn't allow multiple IPs with the same MAC, but they offer to order MAC addresses for additional floating IPs. Would I need to do that? As I understand it, I would configure the IPs/MACs in the containers directly, but wouldn't they appear to the outside network with the MAC of eth0, since all their traffic must pass through eth0?
 
- Would enslaving eth0 mean that I have to move the existing main IP of the machine away from eth0 to another bridge, and change masquerading, firewalling, port mappings, etc. for the existing containers accordingly?
I frankly don't understand what happens to the main IP when the interface gets enslaved.
No, you can still leave it on eth0 if you do not want internal connectivity between the VMs and the host (if you do want that, you would have to configure the IP on the new bridge instead). Bridging the interface essentially creates a virtual switch: traffic from all attached interfaces either gets forwarded to other members of the bridge or is sent out via the bridge port if its destination is a machine on the outside network.

- We are using a datacenter that doesn't allow multiple IPs with the same MAC, but they offer to order MAC addresses for additional floating IPs. Would I need to do that? As I understand it, I would configure the IPs/MACs in the containers directly, but wouldn't they appear to the outside network with the MAC of eth0, since all their traffic must pass through eth0?
If you are going for the bridged solution then eth0 will just forward the packets received from the containers / VMs unchanged, so they will still have the MAC address of the containers, not the MAC address of the host.
 
Hey Stefan, thank you very much for your help. After a bit of experimenting I made it work.
I was unable to make it work with the IP left on eth0: the GUI wouldn't let me create an IP-less bridge on eth0.
Trying to set the IP of the new bridge to 0.0.0.0 in the config and setting the bridge_port to eth0 broke networking.

I finally did this:

- Moved IP from the interface to vmbr0
- Renamed my old vmbr0 to vmbr1
- Ordered MAC addresses for my additional floating IPs from my datacenter
- Changed my firewalling rules (mostly changing eth0 to vmbr0)

- Machines with floating public IPs are now attached to vmbr0.
- The floating IP address and the ordered MAC(!) are configured in the networking config of the containers; the gateway is the one provided by the datacenter for the floating IP (see the sketch after this list).

- Machines in the private network are now attached to vmbr1.
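
A sketch of what that looks like on the container side, e.g. in /etc/pve/lxc/<VMID>.conf (the MAC, IP and gateway shown here are placeholders for the values the datacenter hands out):

Code:
# net0 uses the public bridge vmbr0, the ordered MAC address and the datacenter's gateway
net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=37.x.x.85/26,gw=37.x.x.65,type=veth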

Thanks again for the help!

Be careful when using this in your datacenter: you need to get MAC addresses for your floating IPs.
Otherwise the datacenter would probably treat multiple MAC addresses on one interface as abuse and block your service.

This is my working config; I hope it helps someone. It consumed quite a bit of my time, but I learned a lot.

Code:
auto lo
iface lo inet loopback

# Physical interface
iface eth0 inet manual


# Bridge for the hosts IP and additional public floating IPs

auto vmbr0
iface vmbr0 inet static

# This is the public IP of the host
address  37.x.x.86/26
gateway  37.x.x.1
bridge-ports eth0
bridge-stp off
bridge-fd 0

# Clean up old rules
post-up         iptables -F
post-up         iptables -t raw -F
post-up         iptables -t nat -F

#Set default policies
post-up iptables -P FORWARD ACCEPT
post-up iptables -P INPUT DROP
post-up iptables -P OUTPUT ACCEPT

# Allow Localhost
post-up iptables -A INPUT -i lo -j ACCEPT
post-up iptables -A OUTPUT -o lo -j ACCEPT

# Allow pings to host and floating IPs
post-up iptables -A INPUT -i vmbr0 -p icmp -j ACCEPT

# Allow established connections
post-up iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
post-up iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT

#Allow Incoming Ports on Host

#SSH on Host
post-up         iptables -A INPUT -i vmbr0 -p tcp -m tcp --dport 22 -j ACCEPT          

#Proxmox GUI on Host
post-up         iptables -A INPUT -i vmbr0 -p tcp -m tcp --dport 8006 -j ACCEPT      


# Port mappings for the private 10.10.10.0/24 subnet
post-up       iptables -A PREROUTING -t nat -i vmbr0 -p tcp --dport 12345 -j DNAT --to-destination 10.10.10.3:12345


# Bridge for the private 10.10.10.0/24 subnet

auto vmbr1

iface vmbr1 inet static
address  10.10.10.100/24
bridge-ports none
bridge-stp off
bridge-fd 0


# Masquerade Container bridged LAN 10.10.10.0/24 for internet connectivity in containers
post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
 
OK, there is still a problem.
The additional public IPs work nicely when attached to different containers via vmbr0, one IP per container.

For my special use case I need to attach two public IPs to a *single* container.
I added two interfaces to the container.
eth0 IP1 MAC1
eth1 IP2 MAC2

IP1 can be reached from the internet.
IP2 can't be reached from the internet.
IP1 + IP2 can be reached from the host.

When I move IP2 to a different container, it works, so it doesn't seem to be a routing issue with my datacenter.

Is there something I would need to set up in Proxmox to make this work?
 
Are they in the same subnet? You can't really have two different interfaces in the same subnet; things will get messed up. In that case you would have to add the second IP as an alias to the first interface from inside the container.
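
From inside the container, that would look roughly like this (a sketch; the /26 prefix is assumed to match the datacenter subnet):

Code:
# Inside the container: add the second public IP as an alias on the existing NIC.
# Note that both IPs then share the NIC's MAC address.
ip addr add 37.x.x.85/26 dev eth0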
 
Yes, they are in the same subnet. I debugged this further.

I sniffed traffic by MAC address with tcpdump on the host.

Outgoing traffic from both NIC1 and NIC2 in the container is sent out with the MAC address of NIC1 when sniffed on the host.
This triggers the datacenter's abuse protection; their switches prohibit routing.
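
For reference, the sniffing on the host was done with something along these lines (the MAC address is a placeholder for the container NIC's MAC):

Code:
# -e prints the link-level (Ethernet) headers, -n disables name resolution
tcpdump -e -n -i vmbr0 ether src aa:bb:cc:dd:ee:02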

Thanks again for helping out, I will try your suggestion.
 
Ah, I did not consider that you would need different MAC addresses for the different IPs. Is this a hard requirement or something you can configure? When configuring the second IP as an alias on the same NIC, both IPs would share the same MAC address.
 
This is a hard requirement. The two IPs need to be public because the underlying service needs them for its functionality (STUN).
The datacenter does not allow the same MAC address for different IPs; their switches block traffic if they detect this.
 
Then it might make sense to configure them as /32 so they do not overlap and packets get routed correctly. If you additionally want to use a gateway in this subnet, you would have to add a route to the gateway manually via one of the interfaces. Setting them as /32 would of course remove local connectivity within the subnet, and everything in the subnet would be routed via the default gateway as well. It might suffice to configure only the second interface as /32 (I think it should work, since Linux always takes the most specific route, but I would have to try); however, you would then still have overlapping subnets, so I'd recommend configuring both as /32.
 
OK, I tried that. I configured both IPs as /32 from the GUI, and Proxmox autogenerated this config:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 37.x.x.84/32
# --- BEGIN PVE ---
post-up ip route add 37.x.x.65 dev eth0
post-up ip route add default via 37.x.x.65 dev eth0
pre-down ip route del default via 37.x.x.65 dev eth0
pre-down ip route del 37.x.x.65 dev eth0
# --- END PVE ---

auto eth1
iface eth1 inet static
address 37.x.x.85/32
# --- BEGIN PVE ---
post-up ip route add 37.x.x.65 dev eth1
post-up ip route add default via 37.x.x.65 dev eth1
pre-down ip route del default via 37.x.x.65 dev eth1
pre-down ip route del 37.x.x.65 dev eth1
# --- END PVE ---

It doesn't work. Traffic from eth1 is still seen with eth0's MAC on the host.
 
Did you configure the same gateway for both interfaces? You would need to remove it from one of them, but that shouldn't affect your issue as far as I can tell.

Can you post the routes from inside the container afterwards?
Code:
ip r
 
OK, I removed the gateway from eth1.

Sniffing the container's eth0 MAC on the host:
generating traffic on eth0: there is traffic
generating traffic on eth1: nothing

Sniffing the container's eth1 MAC on the host:
generating traffic on eth0: nothing
generating traffic on eth1: nothing

Code:
# ip r
default via 37.x.x.65 dev eth0
37.x.x.65 dev eth0 scope link


# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0@if146: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:x:x:x:x:7c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 37.x.x.84/32 scope global eth0
valid_lft forever preferred_lft forever
3: eth1@if150: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:x:x:x:x:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 37.x.x.85/32 scope global eth1
valid_lft forever preferred_lft forever
 
Ah sorry, of course this cannot work, since everything goes via the gateway, which is reached via eth0. I'm afraid you will have to use a VRF; at least, I don't see any other way.
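
For what it's worth, a rough VRF sketch from inside the container could look like this (untested; the table number is arbitrary, and the gateway/interface names are taken from the config above):

Code:
# Put eth1 into its own VRF with a separate routing table
ip link add vrf1 type vrf table 100
ip link set vrf1 up
ip link set eth1 master vrf1

# On-link route to the gateway and a default route, both only in the VRF's table
ip route add 37.x.x.65 dev eth1 table 100
ip route add default via 37.x.x.65 dev eth1 table 100

# The service then has to bind its second socket to the VRF device
# (e.g. via SO_BINDTODEVICE) so replies leave through eth1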
 

No need to be sorry, I really appreciate your help! While searching for solutions I stumbled upon "arptables", the equivalent of iptables for ARP. It allows modifying MAC addresses in ARP traffic on the fly. Frankly, I have no idea whether this could be another way to solve the problem. Do you have any experience with it? The examples I have seen looked doable.

As I have no experience at all with VRFs, and it looks complex at first glance, it scares me off a bit.
 
Yes, you might be able to mangle packets that way, but I don't have any experience with it, so I'm not much help there.

Can you maybe also tell me why you need two public IPs for the same container? Is it not possible to split the applications into two containers?
 
This is for a STUN server. STUN can be used by VoIP peers to connect directly to each other, even if both are behind complicated NAT firewalls.
VoIP clients which can't connect directly first contact the STUN server to detect their respective NAT setups and other details. Some more magic is involved afterwards to connect the peers directly, even if both are behind NAT.

For this to work, the STUN daemon needs to bind to two different interfaces on the same host.

Details here: https://stackoverflow.com/questions...erver-needs-two-different-public-ip-addresses
 
