Docker container inside Proxmox VM cannot access the internet

kartr

Aug 9, 2024
I have a dedicated server from OVH running Proxmox 8, with a few Ubuntu 22 VMs under it.

The OVH server requires a custom Proxmox network config if you want to use multiple IPs, as explained in the link below (look at the part about ADVANCE servers; it is the least complicated of the configurations and the one I am using):
https://github.com/ovh/docs/blob/9d...rvers/proxmox-network-HG-Scale/guide.fr-fr.md

As explained in the guide, the /etc/network/interfaces file on my hypervisor looks like this:

Code:
auto lo
iface lo inet loopback

auto enp8s0f0np0
iface enp8s0f0np0 inet static
    address PUB_IP_DEDICATED_SERVER/32
    gateway 100.64.0.1

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    up ip route add ADDITIONAL_IP/32 dev $IFACE
    up ip route add ADDITIONAL_IP_BLOCK/28 dev $IFACE

And the netplan config on the VMs looks like this:

Code:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.0.3/24
        - ADDITIONAL_IP/32
      routes:
        - to: default
          via: 192.168.0.1
          # So that packets destined for the internet use the public IP
          # as their source, not the private IP 192.168.0.3
          from: ADDITIONAL_IP

In general, the hypervisor and the VMs both work normally, but the problem arises with Docker containers installed on a VM: they have no internet connectivity in the default bridge network mode. If I run the containers with host networking they do have internet access, but I do not want to run Docker in host mode because of our setup.
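For anyone reproducing this, the difference between the two modes is easy to demonstrate with a throwaway container (the alpine image here is just an example):

Code:
# default bridge network: no internet connectivity in this setup
docker run --rm alpine ping -c 3 8.8.8.8

# host networking: works, but not an option for us
docker run --rm --network host alpine ping -c 3 8.8.8.8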

Could this be related to IP routing? Hoping for some help from the networking community here!
 
Hi kartr, have you found a solution? I have the exact same problem.

After some troubleshooting, the docker0 interface seems to be the problem: it cannot receive packets from outside.
 
Yes, I'm using OVH and I have an ADVANCE dedicated server. Are you also working with an ADVANCE one?
Yes, ADVANCE too. The Proxmox host has a VM, and we have a Docker container inside it with a bridged network. We can access the container from the internet, but not the other way around. From the container we can ping the VM and even the Proxmox host, but nothing further. We may have to set up something else, either in the container or the VM.
 
Exact same problem. I have tried modifying iptables rules, ufw rules, and Docker's daemon.json file, but nothing works.

I don't know if it is a problem related to the netplan configuration.

OVH support couldn't help me, because they only guide you on machine configuration and not on problems with the OS you have installed.
 
Hi again.

We found a workaround; I think it may work for you.

In the OVH documentation the virtual machines are assigned 2 IPs:
- one on a private network, for example 192.168.0.7
- the ADDITIONAL_IP

The issue is that when you configure a virtual machine, you set up this piece of code:
Code:
      routes:
        - to: default
          via: 192.168.0.1
          from: ADDITIONAL_IP

This tells the machine that the default gateway for any unknown destination is 192.168.0.1 (the LAN IP of the Proxmox host), and that when sending through it, the source IP (from) should be ADDITIONAL_IP.

But this is only respected for packets created by the VM's own operating system, not for forwarded packets (such as the ones from the Docker containers, which live in their own network namespaces).

So, although we don't know exactly why, Docker decides that to communicate with 192.168.0.1 the source IP should be the one from that subnet, and it sends its packets with 192.168.0.7 as the source IP. When they reach the internet, the destination hosts cannot respond, because 192.168.0.7 is a private address.
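If you want to see this on the VM, two standard diagnostics make the behaviour visible (a sketch assuming Docker's default 172.17.0.0/16 bridge subnet; the rule text varies slightly between Docker versions):

Code:
# Docker masquerades container traffic behind the outgoing interface;
# look for a rule like: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
iptables -t nat -S POSTROUTING

# Source address the kernel picks for locally generated traffic
# (this is the case where the netplan "from:" hint is honoured):
ip route get 8.8.8.8

For forwarded (masqueraded) container traffic the "from:" hint is not applied, which is how those packets end up leaving with 192.168.0.7 as their source.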

In order to fix this, we assigned a public IP to our Proxmox host. We had a /28 block, but it probably works even for individual IPs. As an example we'll use the IPs from here: https://help.ovhcloud.com/csm/es-es...ticle=KB0043912#configure-a-usable-ip-address
  • we would assign 46.105.135.110 to the proxmox host.
  • we would assign 46.105.135.101 to our virtual machine
In order to have a second IP in the vmbr0 bridge we added (not replaced) this block to /etc/network/interfaces in the proxmox host:

Code:
auto vmbr0
iface vmbr0 inet static
        address 46.105.135.110/28
followed by systemctl restart networking.service
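After the restart, a quick sanity check confirms the bridge carries both addresses:

Code:
ip addr show vmbr0
# expect both 192.168.0.1/24 and 46.105.135.110/28 to be listed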

Then in our VMs (which run Ubuntu), we set up netplan like this in /etc/netplan/01-eth0.yaml:
Code:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 46.105.135.101/28
        # in our case the next line is optional and can be removed
        - 192.168.0.7/24
      nameservers:
        addresses: [1.1.1.1]
      routes:
        - to: default
          via: 46.105.135.110
followed by netplan apply

The main change is the via property, which now uses the public IP of the Proxmox host.
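You can also ask the kernel which route and source address the VM will use now (the output assumes the example IPs and that your interface is still called eth0):

Code:
ip route get 8.8.8.8
# expect something like: 8.8.8.8 via 46.105.135.110 dev eth0 src 46.105.135.101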

We tested this by pinging 8.8.8.8 from the VM and from one of the Docker containers (the latter would fail with the old configuration):
Code:
ping 8.8.8.8
docker exec -it container-name ping 8.8.8.8

Edit: I forgot to add that I enabled IP forwarding on both the Proxmox host and the VM, but I don't think it's relevant:

Add the following lines to /etc/sysctl.conf:

Code:
# Enable ip_forward
net.ipv4.ip_forward = 1
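followed by the standard reload to apply it without a reboot:

Code:
sysctl -p
# verify the current value:
sysctl net.ipv4.ip_forward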
 
Hi, everything is very clear. I tried with single additional IPs, but apparently it is not possible if they are not on the same subnet. Surely there is a solution, but using the block also simplifies things for our future requirements.

Thank you so much for sharing with me your solution and for making it very clear.
 
Here is the solution for everyone looking.

My hypervisor has a local IP 192.168.0.1 and then for each VM, I use 192.168.0.2, 192.168.0.3 and so on.

On the Proxmox hypervisor, use this config:

Code:
auto lo
iface lo inet loopback

auto enp8s0f0np0
iface enp8s0f0np0 inet static
    address PUBLIC_IP_OF_THE_HYPERVISOR/32
    gateway 100.64.0.1
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/enp8s0f0np0/proxy_arp
    post-up iptables -t nat -A POSTROUTING -s 192.168.0.2/32 -j SNAT --to-source PUBLIC_IP_OF_VM1
    post-up iptables -t nat -A POSTROUTING -s 192.168.0.3/32 -j SNAT --to-source PUBLIC_IP_OF_VM2


iface enp8s0f0np0 inet6 static
    address 2001:41d0:24c:bf00::/56
    gateway fe80::1

auto vmbr0
iface vmbr0 inet static
    # Define a private IP; it should not overlap your existing private networks (on the vrack, for example)
    address 192.168.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    up ip route add PUBLIC_IP_OF_VM1/32 dev vmbr0
    up ip route add PUBLIC_IP_OF_VM2/32 dev vmbr0
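After restarting networking, you can sanity-check the three moving parts on the hypervisor (a diagnostic sketch using the placeholders from the config above):

Code:
# the SNAT rules for each VM's private address:
iptables -t nat -S POSTROUTING

# the host routes steering each public IP onto the bridge:
ip route show dev vmbr0

# forwarding and proxy ARP on the uplink (both should print 1):
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/conf/enp8s0f0np0/proxy_arp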

On each VM, use this config:

Code:
# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: false
      addresses:
        - 192.168.0.2/24
        - PUBLIC_IP_OF_VM1/32
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
      routes:
        - to: default
          via: 192.168.0.1
          from: PUBLIC_IP_OF_VM1


Remember to power off the VMs, restart networking on the hypervisor, and then start the VMs again.
 
Oh, that's clever. So you enable NAT on the hypervisor: every outgoing packet that wants to leave through the internet interface (enp8s0f0np0) with an internal source IP (192.168.0.X) gets its source rewritten to the corresponding public IP.

When the packet reaches its destination and the remote host wants to respond, it can use that public IP because... well... it's public. So the response packet reaches the correct VM through normal routing (the host route for its public IP on vmbr0 delivers it straight to the VM, which carries that IP itself), and the usual NAT rules for incoming packets aren't needed.
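If you want to watch the translation happening, conntrack (from the conntrack-tools package, if it's installed) shows the mappings; the addresses here are the placeholders from the config above:

Code:
# tracked connections originating from the first VM's private IP
conntrack -L -s 192.168.0.2
# each entry shows the original 192.168.0.2 source alongside the
# SNAT-ed public address used on the wire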
 
