Network Quickly Dies in LXC Container

Tbutler
Jun 18, 2024
I'm experimenting with NAT'ed containers. The goal is to have two bridges available to Proxmox's guests: one that allows static IP assignment for public-facing guests, and one that runs through a NAT for things I operate only internally. While I wait for my data center to provide my static IP range, I'm working on the NAT configuration, and I've gotten it to the point where an LXC container can ping the outside world; however, after five or six pings it is as if the networking stack has crashed. Everything stops working until I reboot the container or reconfigure its network, presumably restarting the guest's networking processes.

I'm running `pve-manager/8.2.4/faa83925c9641325 (running kernel: 6.8.8-1-pve)`. I just brought Proxmox up on Debian 12 on a freshly provisioned server.

Here's the guest configuration:

Code:
root@newcedar:~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
63: ethn0@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:24:11:ab:22:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.2/24 scope global ethn0
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:feab:22e8/64 scope link
       valid_lft forever preferred_lft forever

And here is `/etc/network/interfaces` on the Debian 12 host (with the IP address removed):

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eno8303
iface eno8303 inet manual

auto eno8403
iface eno8403 inet manual

iface ens3f0np0 inet manual

iface ens3f1np1 inet manual

auto bond0
iface bond0 inet static
        address xx.xx.xx.xx/30
        gateway xx.xx.xx.xx
        bond-slaves eno8303 eno8403
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        dns-nameservers 8.8.8.8 1.1.1.1
        dns-search serverstld
# dns-* options are implemented by the resolvconf package, if installed

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o bond0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o bond0 -j MASQUERADE
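For completeness, here is how I've been confirming on the host that forwarding and the masquerade rule are actually in effect (standard Debian 12 tooling; the `conntrack` command comes from the conntrack package, which isn't installed by default):

```shell
# IP forwarding must be enabled on the host for the NAT to work at all
sysctl net.ipv4.ip_forward

# Confirm the MASQUERADE rule is present, with packet/byte counters
# so I can see whether it keeps matching after pings start failing
iptables -t nat -L POSTROUTING -v -n

# Inspect the conntrack table for the container's flows
conntrack -L -s 10.10.10.2
```

Both checks look normal immediately after a restart; I haven't yet compared the rule counters before and after the failure.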

I think the issue might be time-based, counting from when `networking.service` restarts. If the container comes up from a reboot and I start a ping, it manages about five or six pings before it stops. However, when I ran `service networking restart` and then immediately pinged, I got 21 successful pings before it stopped again.
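To pin down the timing, I've been running a timestamped ping inside the container right after restarting its networking, so I can see exactly how many seconds elapse before replies stop (`-O` makes ping report each missed reply rather than going silent):

```shell
# Inside the container: restart networking, then timestamp every ping result
systemctl restart networking
ping -O 8.8.8.8 | while read line; do echo "$(date +%T) $line"; done
```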

Running `journalctl -u networking` shows nothing out of the ordinary beyond the service starting and stopping when I ask it to.

Thanks!
I've made a further discovery. If I watch packets going out from my NAT'ed container on the Proxmox host, the successful ones show the container's hostname as the sender. When it stops working, the packets start showing the container's NAT'ed IP address (10.10.10.2) as the sender instead. This happens somewhere between 6 and 20 packets in. I've tried both Debian and CentOS containers...
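One caveat about my own capture: tcpdump resolves addresses to names by default, so "hostname vs. IP" in its output may only reflect whether the source address was still being rewritten to a (resolvable) address at that moment. I'm re-running the capture with `-n` so the raw post-NAT source address is visible (interface and subnet match my config above):

```shell
# On the Proxmox host: watch ICMP leaving via the bond with name
# resolution disabled, so the actual source address after NAT shows up.
# If 10.10.10.2 appears here, the MASQUERADE rule stopped applying.
tcpdump -ni bond0 icmp
```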