no internet from within container

goeldi · Renowned Member · joined Dec 6, 2012
This is on a fresh PVE 8.0.4 installation on a Hetzner server.
The node is in a datacenter, so there are no private IP addresses involved. I mention this because all the posts I found relate to private IP addresses.

The IP addresses in this post are changed for privacy:

PVE Node has IP address 167.120.71.34/26
and gateway 167.120.71.1

I have this subnet on this node: 6.8.188.95/27
So I can use the IPs 6.8.188.95 to 6.8.188.124 for containers and VMs.

/etc/network/interfaces on the pve node:
Code:
auto lo
iface lo inet loopback

iface enp195s0 inet manual

iface enx06ff699e5b18 inet manual

auto vmbr0
iface vmbr0 inet static
    address 167.120.71.34/26
    gateway 167.120.71.1
    bridge-ports enp195s0
# tried also "bridge-ports none"
    bridge-stp off
    bridge-fd 0

In the GUI the PVE host has this network config:

Name            | Type         | Active | Autostart | VLAN aware | Ports/Slaves | Bond Mode | CIDR             | Gateway      | Comment
enp195s0        | Net Device   | Yes    | No        | No         |              |           |                  |              |
enx06ff699e5b18 | Net Device   | No     | No        | No         |              |           |                  |              |
vmbr0           | Linux Bridge | Yes    | Yes       | No         | enp195s0     |           | 167.120.71.34/26 | 167.120.71.1 |

(I also tried this with the CIDR/gateway addresses on the enp195s0 line and, for vmbr0, a CIDR from the extra subnet.)

I created a privileged CT from the standard Ubuntu 22.04 template, with nesting enabled, and set this in the GUI:

Code:
Name:     eth0                IPv4:           Static
MAC addr: SO:ME:CO:DE         IPv4/CIDR:      6.8.188.100/27
Bridge:   vmbr0               Gateway (IPv4): 167.120.71.34

On the node I have full internet access, but I cannot ping the IP address of the CT. The firewall for the CT is deactivated.

Inside the container I can ping its own IP address, but neither the node nor the gateway.
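When a CT shows this pattern (it can ping itself but nothing else), it helps to capture the basic network state before editing any configs. A minimal diagnostic sketch, run inside the container (plain iproute2; device names and output obviously depend on the setup):

```shell
# Dump address, routing, and ARP state in one go.
# "|| true" keeps the loop going even if a table is empty or a tool is missing.
for cmd in "ip -br addr" "ip route" "ip neigh"; do
  echo "== $cmd =="
  $cmd || true
done
```

If `ip route` shows no default route, or `ip neigh` lists the gateway as FAILED, the problem is in routing/ARP rather than in the firewall.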

After container creation I saw that there is no /etc/network/interfaces inside the CT. Although that is normal for Ubuntu (which no longer uses ifupdown by default), I created this file inside the CT:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 6.8.188.100/27
    gateway 167.120.71.34

But this didn't change anything.

I have worked with Proxmox since v2 and the old OpenVZ days, and I still have two nodes running the old end-of-life version 6.4. I am used to such network configurations working out of the box, because there is no special routing or private IP setup involved.
 
Gateway and LXC aren't part of the same subnet. Without custom routes or a router they just don't know how to reach each other?
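The mismatch this points at can be checked mechanically. A small bash sketch (pure shell arithmetic, no external tools; the addresses are the anonymized ones from this thread) that tests whether two addresses fall into the same subnet:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# same_subnet ADDR1 ADDR2 PREFIXLEN -> exit 0 if both are in one subnet.
same_subnet() {
  local mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# CT address vs. the gateway configured for it: they are in different
# networks, so without routing on the node the CT cannot reach that gateway.
same_subnet 6.8.188.100 167.120.71.34 27 && echo same || echo different
```

The same check confirms that on the working node the CT address and the vmbr0 address (6.8.183.90 and 6.8.183.82, /28) do share a subnet, which is why that setup works: vmbr0 is the CT's on-link gateway there.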
 
Gateway and LXC aren't part of the same subnet. Without custom routes or a router they just don't know how to reach each other?
Yes, but the subnet is routed to this gateway.
e.g. on one of my other pve nodes i have this:

node enp0s31f6 (net device)
CIDR = 137.202.54.82/26
Gateway = 137.202.54.65

node vmbr0 (bridge)
CIDR = 6.8.183.82/28

CT eth0
bridge = vmbr0
CIDR = 6.8.183.90/28
Gateway = 137.202.54.82

This works perfectly. I know that vmbr0 is the router here. I tried the same on my new node, to no avail.
 
 
I think my description above is a little complicated, so here are screenshots of the working and the not-working situation. I cannot see any difference between them. The working node runs PVE 6.4-15, the not-working one v8.0.4.

Working Node config:

working-node.png

Not working Node config:

notworking-node.png
(tried with and without "VLAN aware")

Working CT config:

working-ct.png

Not working CT config:

notworking-ct.png

As you can see, such node/CT combinations can work even when the node's gateway is not in the same range as the extra subnet.

The only thing that works is a ping from the node to the CT and from the CT to the node. On the node itself, internet access works perfectly.
 
The gateway IP address of an interface must belong to the same subnet as the interface; reaching other networks is the responsibility of routers. In this case, the gateway of the vmbr0 interface must belong to 6.8.183.90/28.

When I try to set a gateway address (e.g. 6.8.183.91) on the vmbr0 interface, Proxmox tells me:

Parameter verification failed. (400)
gateway: Default gateway already exists on interface 'enp195s0'
 
Compare the contents of /etc/network/interfaces on both servers. Maybe you set up a routed configuration for the PVE 6 node (see here for an example: https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_routed ) and don't remember it, and now you are trying to use the default bridged configuration on PVE 8?

That is an advanced setup and you won't see such settings in the web UI, which only covers bridged configurations, so screenshots won't help much.
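For reference, a routed setup along the lines of that wiki section could look roughly like this on the PVE 8 node. This is a sketch only, built from the anonymized addresses in this thread; the vmbr0 address (here 6.8.188.95/27, the first address the OP listed for the extra subnet) and the proxy_arp choice are assumptions that must be adapted to the real Hetzner setup:

```
# /etc/network/interfaces -- routed sketch, NOT a verified config
auto lo
iface lo inet loopback

auto enp195s0
iface enp195s0 inet static
    address 167.120.71.34/26
    gateway 167.120.71.1
    # let the node forward traffic for the extra subnet
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/enp195s0/proxy_arp

auto vmbr0
iface vmbr0 inet static
    address 6.8.188.95/27
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

CTs on vmbr0 would then take an address from the extra subnet and use the vmbr0 address (6.8.188.95) as their gateway, matching the pattern of the working PVE 6 node.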
 
When I try to set a gateway address (e.g. 6.8.183.91) on the vmbr0 interface, Proxmox tells me:
Hello,

Is it possible that you already have a default gateway? You can have at most one default gateway in `/etc/network/interfaces`.
 
