Unleaded6120

Hello!
I have recently set up a PVE 8.0 workstation, with the following network configuration:
Code:
       [WAN]
         |
     [router]
         |
     (enp3s0)
         |
    [[PVE host]]
      /      \
(vmbr0)    (vmbr1)
   |        /    \
[OPNsense VM]    [all VMs and CTs]
where vmbr0 is a direct bridge to enp3s0, and vmbr1 is a virtual LAN for OPNsense.

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.254
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.10.1.10/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0


This way, all traffic to and from the VMs and CTs passes through OPNsense: it enters and leaves my physical house LAN on vmbr0 (which OPNsense treats as its own WAN), gets firewalled and NATed there, and reaches the guests on their respective addresses in OPNsense's virtual LAN (vmbr1).
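For example, a traceroute from one of the guests should show OPNsense's LAN address as the first hop and the house router as the second (just a sketch based on the addressing above; traceroute may need to be installed in the guest):
Bash:
# from a VM or CT on vmbr1; -n skips DNS lookups
traceroute -n 8.8.8.8
# expected first hops, given the addressing above:
#   1  10.10.1.201    (OPNsense LAN)
#   2  192.168.1.254  (house router)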

This setup works very well, as all of my VMs (Fedora CoreOS, Fedora Server 38, Windows 10) have access to the internet, to my house LAN (OPNsense WAN) and to their own small network (OPNsense LAN).
I configured DHCPv4 in OPNsense as well, and it works correctly.


My problems start when I use LXC containers: configuring them through the web UI, I've tried all reasonable network configurations:

vmbr0 static (not guaranteed to work, since my home router's DHCP doesn't play well with self-assigned static IPs)
vmbr0 DHCP (should work, works for VMs and the whole house LAN)
vmbr1 static (should work)
vmbr1 DHCP (should work, would be my choice)
(in the static settings, I gave sensible IP and gateway values that worked fine in the other cases).

For the DNS domain and server, I tried both "use host settings" and 10.10.1.201.
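For reference, the vmbr1 + DHCP attempt corresponds to roughly this on the CLI (a sketch; 105 is a placeholder CT ID):
Bash:
# rough CLI equivalent of the vmbr1 + DHCP configuration; 105 is a placeholder CT ID
pct set 105 --net0 name=eth0,bridge=vmbr1,ip=dhcp
pct set 105 --nameserver 10.10.1.201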

No matter what, my LXC containers don't get a working connection. Unprivileged vs. privileged also doesn't change the outcome.
The interface is down upon creation of the CT:
Code:
[root@fedora ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if46: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether <%%MAC_ADDRESS%%> brd ff:ff:ff:ff:ff:ff link-netnsid 0

And if I bring it up manually, there seems to be no IPv4 configuration at all:
Code:
[root@fedora ~]# ip link set dev eth0 up
[root@fedora ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether <%%MAC_ADDRESS%%> brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fcdc:4fff:fe37:a1b3/64 scope link tentative
       valid_lft forever preferred_lft forever

Ping test:
Code:
[root@fedora ~]# ping 8.8.8.8
ping: connect: Network is unreachable
[root@fedora ~]# ping 10.10.1.201
ping: connect: Network is unreachable
[root@fedora ~]# ping 10.10.1.10
ping: connect: Network is unreachable
where 10.10.1.201 is the OPNsense router and 10.10.1.10 is the PVE host.
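For completeness, this is roughly how the wiring can be checked from the host side (a sketch; 105 is a placeholder CT ID):
Bash:
# the CT's network config as PVE sees it (105 is a placeholder CT ID)
pct config 105 | grep ^net
# interfaces enslaved to the internal bridge; the CT's veth should show up here
ip -br link show master vmbr1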

All of this happens only with some CT templates, though. For some bloody reason, Alpine (3.18) and Ubuntu (23.04) work flawlessly, out of the box, every time; whereas Debian (12), Fedora (38) and Alma (I'm pretty sure I tested this one too) just don't work and exhibit this behavior.

Example of a working Ubuntu CT:
Code:
root@ubuntu:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether <%%MAC_ADDRESS%%> brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.1.102/24 metric 1024 brd 10.10.1.255 scope global dynamic eth0
       valid_lft 4984sec preferred_lft 4984sec
    inet6 fe80::f0ef:56ff:fe2e:a79e/64 scope link
       valid_lft forever preferred_lft forever
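If it helps to narrow things down, comparing which network management services the templates ship enabled might reveal the difference; something like this in both a working and a broken CT (a sketch, only applicable to the systemd-based templates):
Bash:
# run in both a working (Ubuntu) and a broken (Fedora/Debian) CT and compare
systemctl list-unit-files | grep -Ei 'networkd|networkmanager|networking'
systemctl is-enabled systemd-networkd.service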

I have searched the web for every PVE LXC networking problem I could find, and so far haven't managed to solve this.
For now I'm just using Ubuntu and Alpine containers, but I can't simply accept this and move on; I want things to work properly. And I don't want no Ubuntu :)

One last piece of evidence that may be helpful:
On the same CT templates that have broken networking, another thing happens. When I start the CT for the first time, there is never any output on the xterm.js console; if I go in with pct enter, the CT is up and running, waiting for a login, but the console stays blank and neither reads nor writes. From the next startup on, the console works as intended. The network, though, doesn't work even when I pct enter, not even on first boot. On Ubuntu and Alpine the console works from the first start.

Thanks in advance, if anybody can help me work this out! I have been smashing my head against it for days :')
 
Hello,

enable and start the systemd-networkd.service unit in your running CT:

Bash:
systemctl enable systemd-networkd.service
systemctl start systemd-networkd.service
ip a              # the interface should now pick up its configuration
shutdown -r now   # reboot the CT
ip a              # check again after the reboot

Vlodek
 
Oh my god... Thank you, that was the easiest thing; I didn't even know that systemd-networkd was disabled by default.
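In case it's useful to anyone else hitting this: the same fix should also be doable from the PVE host without entering each container, along these lines (a sketch; 105 is a placeholder CT ID):
Bash:
# enable and start systemd-networkd inside the CT from the host; 105 is a placeholder CT ID
pct exec 105 -- systemctl enable --now systemd-networkd.service
# then check that the address actually shows up
pct exec 105 -- ip -4 a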
 
