2 LXCs, same settings, one accesses the internet, the other doesn't

Piero

Hello,

LXC 703:

Code:
root@orion ~ # pct config 703
arch: amd64
cmode: shell
cores: 4
features: nesting=1
hostname: gitea
memory: 8096
mp0: /etc/pve/nodes/orion,mp=/certs
mp1: /orizsdb,mp=/gitdata
net0: name=eth0,bridge=vmbr1,hwaddr=ZZ:ZZ:ZZ:ZZ:22,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local:703/vm-703-disk-0.raw,size=32G
startup: order=2
swap: 8096
tags: infra
lxc.mount.entry: /orizsdb /gitdata none rbind 0 0

LXC 707:

Code:
root@orion ~ # pct config 707
arch: amd64
cmode: shell
cores: 4
features: nesting=1
hostname: search
memory: 4096
net0: name=eth0,bridge=vmbr1,hwaddr=ZZ:ZZ:ZZ:ZZ:E8:AE,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local:707/vm-707-disk-1.raw,size=40G
startup: order=2
swap: 512
tags: infra

From 703:

Code:
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=5.31 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=118 time=5.37 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=118 time=5.28 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=118 time=5.35 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 5.281/5.327/5.369/0.033 ms

From 707:

Code:
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5131ms

From 703:
Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ZZ:ZZ:ZZ:ZZ:1c:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.0.98/24 brd 10.10.0.255 scope global dynamic eth0
       valid_lft 392sec preferred_lft 392sec
    inet6 fe80::be24:11ff:fe69:1c22/64 scope link
       valid_lft forever preferred_lft forever
root@gitea:/root# ip route
default via 10.10.0.1 dev eth0
10.10.0.0/24 dev eth0 proto kernel scope link src 10.10.0.98

From 707:
Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ZZ:ZZ:ZZ:ZZ:ae brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.0.109/24 brd 10.10.0.255 scope global dynamic eth0
       valid_lft 487sec preferred_lft 487sec
    inet6 fe80::be24:11ff:fe9a:e8ae/64 scope link
       valid_lft forever preferred_lft forever
root@search:/root# ip route
default via 10.10.0.1 dev eth0
10.10.0.0/24 dev eth0 proto kernel scope link src 10.10.0.109

Host /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface lo inet6 loopback

iface enp35s0 inet manual
#1GB

iface enp1s0 inet manual
#10GB

source /etc/network/interfaces.d/*

#Proxmox Interfaces
auto wan0
iface wan0 inet static 
      address XX.XX.XX.169/26
      gateway XX.XX.XX.129
      bridge-ports enp35s0
      bridge-stp   off
      bridge-hw enp35s0
      bridge-fd    0
      hwaddress    ether zz:zz:59:4d:28:cf
      up           sysctl -p
      up ip route add XX.XX.XX.169/32 dev wan0

auto vmbr1
iface vmbr1 inet static
        address 192.168.20.6/24
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        up ip addr add 10.10.0.1/16 dev vmbr1
        hwaddress    ether zz:zz:zz:1f:e4:32
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 10.0.0.0/9 -o wan0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/9 -o wan0 -j MASQUERADE
        post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

I've been struggling with this for some time and have restarted the host several times, always with the same result.
If anyone has an idea, you're welcome to share it.
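In the meantime, one thing I can check on the host while it's broken (a sketch, assuming the iptables rules from the config above are loaded) is whether the MASQUERADE rule is matching packets at all:

Code:
# packet/byte counters per NAT rule; a zero counter on the MASQUERADE
# line while a container is pinging 8.8.8.8 means traffic never reaches it
iptables -t nat -L POSTROUTING -v -n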
 
Can they ping the gateway? Do you have the firewall enabled or any rules configured?
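For a quick check from the host, something along these lines (a sketch, assuming both containers are running) shows the two cases side by side:

Code:
# ping the gateway from inside each container
pct exec 703 -- ping -c 3 10.10.0.1
pct exec 707 -- ping -c 3 10.10.0.1
# and verify the firewall state on the node
pve-firewall status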
 
Hi Gabriel,

I have the firewall disabled everywhere. Interestingly, it works from time to time:
at the moment it has internet access, but this morning it didn't.
When there's no internet access I can ping the DHCP server (10.10.0.2) but not the gateway (10.10.0.1).
I don't know what I'm missing here.
I'm running a Proxmox cluster on 5 bare-metal servers at Hetzner.
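Next time it drops I'll also check whether more than one MAC answers for the gateway, along these lines (assuming arping from iputils is available in the container):

Code:
# current neighbour entry for the gateway
ip neigh show 10.10.0.1
# send explicit ARP probes; replies from two different MACs
# would mean duplicate 10.10.0.1 addresses on the network
arping -I eth0 -c 4 10.10.0.1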
 
Not being able to ping the gateway is the main issue, because the gateway is the default route your packets take. It sounds like you want a NAT-type setup; see https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_masquerading for an example setup.
Note that your vmbr1 bridge adds a second address right when it is brought up (10.10.0.1/16 on top of 192.168.20.6/24), the subnet you masquerade (10.0.0.0/9) differs from the one on that gateway address (/16), and your containers sit on yet another subnet (/24).
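For reference, the masquerading example on that wiki page looks roughly like this (eno1/vmbr0 are the interface names used there, not yours):

Code:
auto vmbr0
# private subnet for the guests
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE

Note how the masqueraded subnet matches the bridge address's subnet exactly.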
 
I masquerade /9 because I thought I might extend the subnets in the future.
I arrived at these settings after a lot of experimentation on a 5-node cluster hosted at Hetzner.
I have 2 NICs on each node, with a dedicated 10Gb switch for internal networking.
I'm not a network expert, but your point about vmbr1 adding 10.10.0.1/16 makes me realize I set that same address on every node, which is probably a bad config. When a DHCP lease is renewed, the gateway might answer from a different node, and the guest might lose its connection.
I can't experiment at will on the cluster for fear of breaking the network stack, so I ordered 3 mini PCs with 2 NICs to reproduce the environment at home.
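To confirm the duplicate-address theory, a quick check would be to list the vmbr1 addresses on each node:

Code:
# run on every node; 10.10.0.1 showing up on more than one confirms the conflict
ip -4 addr show vmbr1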
Thanks for the hints.
 
Just wanted to share my new test environment:
- 3× Beelink EQR5 (AMD Ryzen)
- dual LAN (1 NIC dedicated to VMs, 1 for WAN)
- 36 CPUs for Proxmox + k3s
- next I'll upgrade each to 64 GB RAM
- total budget: $1,161


(Photos attached: IMG_20241202_104930.jpg, IMG_20241202_105000.jpg)
 