LXC Ubuntu 24.04-2 on PVE 9 broken

WolfTongue

New Member
Aug 9, 2025
Hello,

I recently upgraded to PVE 9 and all my machines are running. However, I now have an issue with networking in Ubuntu 24.04-2 LXC containers (official templates): when I spawn one, the network is dead and the interface shows link down.

During the upgrade from PVE 8 to PVE 9 I faced something similar with the host, and I followed these instructions to solve it: https://forum.proxmox.com/threads/no-more-network-adapter-after-pbs-4-upgrade.169304/#post-790211

If I do the same inside the LXC, it gets network and works again until the next reboot. I have no idea whether this is a problem on my end or whether something is broken in general. I have a three-node cluster (non-HA), and the behaviour is the same on all three nodes (different hardware).
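For reference, what I do inside each affected container boils down to something like this (a rough sketch of the workaround from the linked thread; I'm assuming the interface is named `eth0` and that the image uses systemd-networkd, as Ubuntu noble images normally do):

```
# inside the container, e.g. via `pct enter <vmid>` on the host
ip link set eth0 up                    # bring the link up manually
systemctl restart systemd-networkd     # re-trigger the DHCP request
networkctl status eth0                 # confirm the interface got a lease
```

As said, this only holds until the next reboot.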
I also tried spawning 22.04 (works) and then running `do-release-upgrade` to 24.04; after that upgrade the network stops working as well.

But on all nodes I can perfectly spawn:
- Debian 12
- Ubuntu 22.04

Edit: Installing another image of Ubuntu 24.04.3 LTS from linuxcontainers works for me. After installing `openssh-server` I can also SSH into it, so I think it is an error in the LXC image? (The one I used for testing: https://images.linuxcontainers.org/images/ubuntu/noble/amd64/default/)
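In case anyone wants to reproduce this, my test with the linuxcontainers image was roughly the following (`<build>` stands for one of the dated build directories in the listing above; the VMID and hostname are just examples):

```
# on the PVE host -- fetch a rootfs build from the directory linked above
cd /var/lib/vz/template/cache
wget -O ubuntu-noble-lxc.tar.xz \
    https://images.linuxcontainers.org/images/ubuntu/noble/amd64/default/<build>/rootfs.tar.xz

# create and start a test container from it
pct create 200 /var/lib/vz/template/cache/ubuntu-noble-lxc.tar.xz \
    --hostname noble-test --unprivileged 1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```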

Edit2: Ubuntu 24.04 containers that existed before the upgrade from PVE 8 to PVE 9 work fine. Maybe that is because they are up to date with 24.04.3, while the latest official template is still 24.04.2? (The linuxcontainers image is already 24.04.3 as well.)
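A quick way to compare the point releases of a working and a broken container from the host (`<vmid>` is a placeholder):

```
pct exec <vmid> -- cat /etc/os-release | grep VERSION=
# e.g. VERSION="24.04.3 LTS (Noble Numbat)" on the old containers
#  vs. VERSION="24.04.2 LTS (Noble Numbat)" on freshly spawned ones
```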

Can someone help please?
 
I've just upgraded to PVE 9 and I'm seeing the same thing.

It looks like any container created from the ubuntu-24.04-standard_24.04-2_amd64.tar.zst image has this problem, regardless of whether it was created before or after the upgrade.
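In case it helps, this is a quick way to check whether a newer build of the official template has been published yet:

```
# on the PVE host
pveam update
pveam available --section system | grep ubuntu-24.04
```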

I'm running a single PVE node, and I haven't had any other networking problems with it prior to or during the upgrade.
 
I just tried it with the ubuntu-24.04-standard_24.04-2_amd64.tar.zst image on a single-node PVE 9 install. I configured an unprivileged container with its network set to DHCP on a simple-zone SDN vnet with DHCP and SNAT enabled. After booting the container I logged in on the Proxmox console and pinged Google's DNS server at 8.8.8.8, which worked like a charm.
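Roughly what I ran, for comparison (VMID 105 and the vnet name `simple1` are just examples from my setup):

```
# on the PVE host: create and start an unprivileged test container
pct create 105 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
    --unprivileged 1 \
    --net0 name=eth0,bridge=simple1,ip=dhcp
pct start 105

# then, from the container's console
ping -c 3 8.8.8.8
```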

I see no reason why an even simpler setup (a container attached directly to the default vmbr0 bridge of a stock Proxmox VE install, with DHCP enabled) shouldn't work. How did you configure networking on the Proxmox VE host and in the container settings?
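For reference, the host side of a stock Proxmox VE install usually looks something like this (the NIC name and addresses below are examples, not your actual values):

```
# /etc/network/interfaces on a default PVE host
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

and the container's network device would then simply be `name=eth0,bridge=vmbr0,ip=dhcp` in its `net0` setting.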