Container of Node has no internet

ciarandwi

New Member
Aug 22, 2024
Please note: I'm a complete beginner when it comes to networking. I'm a frontend/backend dev, not a network engineer, so all of this is very confusing to me.

As the subject says, we have a container that has no internet; pinging anything results in 100% packet loss.

Approximately a month ago, all of our interfaces went down and we had to bring every single one of them back up, which was done with the help of a senior (somebody who knows what they're doing). However, we forgot to test one of the containers to see if it had internet; we just checked that the website was back up and running.

I've now tried to deploy, and it failed when trying to pull the code from git, which would be expected when your container has no internet.

I'm just trying to figure out how I can rectify this so the container has an internet connection.

I think we've configured it like this: the node has internet via a bridge called vmbr1, and the bridge connects the node to the container.
Code:
ip a
...
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:xx:xx:xx:xx:8b brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/20 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:xx8b/64 scope link
       valid_lft forever preferred_lft forever

(I can ping google.com from here)

The container has two interfaces from the looks of it: lo and eth0.

Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:48 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.189/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:e048/64 scope link
       valid_lft forever preferred_lft forever

Here's the /etc/network/interfaces

Code:
cat /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
    address  SERVER IP
    netmask  255.255.255.0
    gateway  SERVER GATEWAY
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address  192.168.1.2
    netmask  255.255.240.0
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j MASQUERADE

I do have a log (.bash_history) from the server of all the commands we ran to set the bridges and connections back up.

If any other information is required, please let me know
 
Managed to resolve this by simply moving everything to a new server. I could never fix the root cause.
 
Managed to resolve this by simply moving everything to a new server. I could never fix the root cause.
ahahahah xD
A netmask of 255.255.240.0 is not the same as the /24 in your iptables rules. Either convert the /24 to match your netmask, or change the netmask to 255.255.255.0 so it's accepted by the firewall rules.
 
ahahahah xD
A netmask of 255.255.240.0 is not the same as the /24 in your iptables rules. Either convert the /24 to match your netmask, or change the netmask to 255.255.255.0 so it's accepted by the firewall rules.
Off the top of my head, CIDR /24 is 255.255.255.0 and 255.255.240.0 would be CIDR /20?
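A quick way to double-check the conversion (just a sanity check, using Python's standard ipaddress module from the shell):

Code:
python3 -c "import ipaddress; print(ipaddress.ip_network('192.168.0.0/20').netmask)"
# prints 255.255.240.0
python3 -c "import ipaddress; print(ipaddress.ip_network('192.168.0.0/24').netmask)"
# prints 255.255.255.0

So yes: /24 is 255.255.255.0, and 255.255.240.0 is /20.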
 

Check IP Configuration

Ensure that the container's eth0 interface is correctly configured. Its prefix length should agree with the bridge's: eth0 is 192.168.0.189/24 while vmbr1 is 192.168.1.2/20, so the bridge address at 192.168.1.2 (most likely the container's gateway) falls outside the container's /24, and the container cannot reach its gateway.
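One quick way to confirm this from inside the container (a sketch; I'm assuming 192.168.1.2 is the gateway the container was given):

Code:
# inside the container
ip route                 # check what the default route points at, if anything
ping -c 3 192.168.1.2    # test whether the gateway itself is reachable

If eth0 only has a /24 while the gateway is 192.168.1.2, you'd expect the gateway ping to fail with something like "Network is unreachable".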

Update Network Interfaces

Modify your /etc/network/interfaces (or the container's network settings) so both sides agree. You could set the container's eth0 to use DHCP, give it the same /20 prefix as vmbr1, or move bridge and container onto a consistent /24.
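On the Proxmox host, a minimal sketch of the /20 option (the container ID 100 is a placeholder — substitute your own, and adjust address and gateway to your setup):

Code:
# give the container's eth0 the same /20 prefix as vmbr1,
# with the bridge address as its gateway
pct set 100 -net0 name=eth0,bridge=vmbr1,ip=192.168.0.189/20,gw=192.168.1.2
pct reboot 100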

Enable IP Forwarding

Make sure IP forwarding is enabled on the host:
Code:
echo 1 > /proc/sys/net/ipv4/ip_forward
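That echo only lasts until reboot. To make the setting persistent (standard sysctl, nothing Proxmox-specific):

Code:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p    # apply immediately and verify the value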

Check Firewall Rules

Verify that your iptables rules allow traffic from your container to the internet:
Code:
iptables -L -t nat
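Given the /20 vs /24 mismatch discussed above, one possible fix (a sketch that keeps the rest of the posted setup) is to widen the rule so it covers the whole bridge subnet, and mirror the change in the post-up/post-down lines of /etc/network/interfaces:

Code:
# drop the old /24 rule and add one covering the whole /20
iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s '192.168.0.0/20' -o vmbr0 -j MASQUERADE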

Restart Networking

After making changes, restart networking services:
Code:
systemctl restart networking
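On hosts where ifupdown2 is installed (the default on current Proxmox VE), you can also apply interface changes without restarting the whole service:

Code:
ifreload -a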

Test Connectivity

Try pinging an external site from within the container:
Code:
ping google.com
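If the name lookup fails, test in stages to separate routing from DNS (assuming 192.168.1.2 is the gateway, as above):

Code:
ping -c 3 192.168.1.2    # 1. can the container reach its gateway?
ping -c 3 8.8.8.8        # 2. can it reach the internet by IP (is NAT working)?
ping -c 3 google.com     # 3. does DNS resolution work?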
If you still face issues, consider checking logs for any errors or consult with a network engineer for further assistance.
 
