Container network running but no external access

JBB

Member
I've been running VMs on PVE fine, but haven't tried containers before.

When I start up a container (using a static IPv6 WAN address and a static IPv4 LAN address), the networking appears to be OK inside the container, but I can't ping anything other than the gateway and the DNS server IPs, and the container can't resolve any names either. I can connect to other hosts on the LAN, though.

Is there anything else in the networking I need to configure for this to work? Or perhaps this is an upstream problem. Does my ISP not like the way the container is sending its packets?
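To make that concrete, here's roughly what the symptoms look like from inside the container (the external target address and the hostname are just arbitrary examples, not part of my setup):

```shell
# Inside the container (gateway and DNS addresses as in the configs below)
ping6 -c 3 2001:ba8:0:2c21::1          # gateway: replies fine
ping6 -c 3 2001:ba8:0:2c01::           # DNS server IP: replies fine
ping6 -c 3 2a00:1450:4009:800::200e    # arbitrary external host: 100% loss
host proxmox.com                       # name resolution: times out
```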

Thanks for any help.

Here's my host config (I just noticed there are no DNS servers on the IPv4 interface. Hm. I don't suppose that matters):

Code:
auto vmbr0
iface vmbr0 inet static
        address 185.73.99.98 
        netmask 255.255.252.0
        gateway 185.73.0.2 
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

iface vmbr0 inet6 static
        address 2001:BA8:0:2C22::D1
        netmask 64
        gateway 2001:ba8:0:2c21::1
        dns-nameservers 2001:ba8:0:2c01:: 2001:ba8:0:2c02::

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
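For reference, a couple of host-side sanity checks against the config above (interface names as shown; this is just a sketch of what I'd look at):

```shell
# On the PVE host: eth0 and the container's veth should both be
# enslaved to vmbr0
ip link show master vmbr0

# The upstream gateway should answer neighbour discovery on the bridge
ip -6 neigh show dev vmbr0       # 2001:ba8:0:2c21::1 -> REACHABLE/STALE
ping6 -c 3 -I vmbr0 2001:ba8:0:2c21::1
```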
The interfaces on the container:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
145: eth0@if146: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 86:8c:8a:0d:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001:ba8:0:xx::xx/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::848c:8aff:xx:xx/64 scope link 
       valid_lft forever preferred_lft forever
150: eth1@if151: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:c7:69:e9:4f:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.202/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::fcc7:69ff:fee9:4f7a/64 scope link 
       valid_lft forever preferred_lft forever
The routes on the container look like this:

Code:
2001:ba8:0:2c21::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via 2001:ba8:0:2c21::1 dev eth0 proto static metric 1024 pref medium
Code:
default via 10.10.10.202 dev eth1 proto static 
10.10.10.202 dev eth1 proto static scope link
 

Richard

Proxmox Staff Member
Mar 6, 2015
Is there anything else in the networking I need to configure for this to work? Or perhaps this is an upstream problem. Does my ISP not like the way the container is sending its packets?


Probably - AFAICS you extended your IPv6 subnet directly to the container, i.e. the container interface's MAC address is seen at the ISP's router. If your ISP is also your hoster, it will not allow MAC addresses other than the ones it already knows.

To figure this out, follow your packets with tcpdump from the container to the host and on to its physical interface - if the packets are sent correctly towards the router but there is no answer, you have the case I've described.
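Concretely, something along these lines (a sketch - your container's global address is masked in the output you posted, so substitute the real one):

```shell
# Replace with the container's real global address (masked in the post)
CT_IP6=2001:ba8:0:xx::xx

# 1. On the bridge: do the container's echo requests appear at all?
tcpdump -ni vmbr0 "icmp6 and host $CT_IP6"

# 2. On the physical uplink: do they leave the box, with the
#    container's MAC as the source address? (-e prints MACs)
tcpdump -eni eth0 "icmp6 and host $CT_IP6"
```

If the requests go out on eth0 but no replies (and no neighbour advertisements from the router) ever come back, the upstream router is most likely dropping the unknown MAC.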
 
