[SOLVED] Proxmox - Ubuntu Server 24 LXC connection issue

sahn

New Member
Oct 30, 2025
Hello everyone, this is my first attempt at setting up Proxmox with an Ubuntu server so I can build a media-streaming NAS for my home.
I have come quite far, but I am now at a point where I do not know whether this is normal or an issue I have to handle.

After installing the Ubuntu CT template
ubuntu-24.04-standard 24.04-2 amd64.tar.zst
and creating an LXC container from it,
I ran my Ubuntu container, but it gives me a "failed to connect" error on login.

This always happens after mounting the ZFS pool.
In the first installation no issue is shown.
Updates were tested and worked.
After I add the ZFS pool NAS drive and open the Ubuntu container again, I get this:

```
Welcome to Ubuntu 24.04 LTS (GNU/Linux 6.14.8-2-pve x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
```

```
root@media-server:~# apt update
Ign:1 http://archive.ubuntu.com/ubuntu noble InRelease
Ign:2 http://archive.ubuntu.com/ubuntu noble-updates InRelease
Ign:3 http://archive.ubuntu.com/ubuntu noble-security InRelease
Ign:1 http://archive.ubuntu.com/ubuntu noble InRelease
Ign:2 http://archive.ubuntu.com/ubuntu noble-updates InRelease
Ign:3 http://archive.ubuntu.com/ubuntu noble-security InRelease
Ign:1 http://archive.ubuntu.com/ubuntu noble InRelease
Ign:2 http://archive.ubuntu.com/ubuntu noble-updates InRelease
Ign:3 http://archive.ubuntu.com/ubuntu noble-security InRelease
Err:1 http://archive.ubuntu.com/ubuntu noble InRelease
Temporary failure resolving 'archive.ubuntu.com'
Err:2 http://archive.ubuntu.com/ubuntu noble-updates InRelease
Temporary failure resolving 'archive.ubuntu.com'
Err:3 http://archive.ubuntu.com/ubuntu noble-security InRelease
Temporary failure resolving 'archive.ubuntu.com'
Reading package lists... Done
Building dependency tree... Done
All packages are up to date.
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/noble/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/noble-updates/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/noble-security/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
```
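Worth noting: "Temporary failure resolving" only proves that name resolution failed; it does not say whether IP routing works underneath. A minimal way to separate the two (a sketch; 8.8.8.8 is just a convenient public address):

```bash
# If this succeeds while name resolution fails, the problem is DNS only;
# if it fails too, the container has no working route at all
ping -c 3 8.8.8.8

# Show which resolver the container is configured to use
cat /etc/resolv.conf

# Try resolving the failing host through the configured resolver
getent hosts archive.ubuntu.com
```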

Not only updates: all connectivity seems to be broken.
I know I am doing something wrong, but I haven't been able to put my finger on it for a long time now.
I tried adding the mount option from the host shell
with `pct set 100 -mp0 /media-pool/mnt/media,mp=/mnt/media`,
and also creating it manually in the GUI,
but everything ends with the same result.
[screenshot: 1761852056525.png]

[screenshot: 1761852074498.png]
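For reference, one way to confirm that the bind mount itself is applied (a sketch; VMID 100 and the paths are taken from the `pct set` command above):

```bash
# On the Proxmox host: confirm the mount point is present in the CT config
pct config 100 | grep mp0

# Run commands inside the container from the host to check the mounted path
pct exec 100 -- df -h /mnt/media
pct exec 100 -- ls -la /mnt/media
```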

I tested it with different disk size / memory / core allocations in the initial setup.
I tested memory allocations (in MB) of
- 512
- 1024
- 1536
All end up in the same place.

### DNS / Network Setup on the LXC

Network IP: DHCP
[screenshot: 1761852247216.png]
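Depending on the template, Proxmox writes the container's network settings into different in-guest files at start, so it can be worth checking which mechanism this template actually uses (a sketch, run inside the container):

```bash
# See whether systemd-networkd or netplan configuration files are present
ls -la /etc/systemd/network/ /etc/netplan/ 2>/dev/null

# Print whatever network configuration files exist
cat /etc/systemd/network/*.network /etc/netplan/*.yaml 2>/dev/null
```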


The issue doesn't go away even if I detach/remove the newly added mount.

## Setup Details

Proxmox is installed on the 250 GB SSD.
A 2 TB WD Red NAS drive was added, formatted, and turned into a ZFS pool.
[screenshot: 1761852575739.png]
 
Hi,

And what is your network configuration?

Because your information points to a network issue, not a storage issue.

Best regards,
 
Thank you for the reply, JeanL.

These are the network configurations of the Proxmox server:

[screenshot: 1761896749080.png]
[screenshot: 1761896756970.png]

```
root@proxmox:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether d8:cb:8a:e7:ed:3f brd ff:ff:ff:ff:ff:ff
altname enxd8cb8ae7ed3f
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d8:cb:8a:e7:ed:3f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::dacb:8aff:fee7:ed3f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```


```
root@proxmox:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.1.2/24
gateway 192.168.1.1
bridge-ports enp2s0
bridge-stp off
bridge-fd 0


source /etc/network/interfaces.d/*
```
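The host side can also be checked for the container's virtual NIC: while the CT is running, its veth device (named veth100i0-style) should appear as a member of vmbr0. A sketch using plain iproute2 on the host:

```bash
# List interfaces enslaved to the bridge; a running CT adds a veth device
ip link show master vmbr0

# Equivalent view via the iproute2 bridge tool
bridge link show
```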

## Ping test

```
root@proxmox:~# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.374 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.257 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=2.48 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2054ms
rtt min/avg/max/mdev = 0.257/1.035/2.476/1.019 ms
root@proxmox:~# ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.014 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.009 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.011 ms
^C
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2071ms
rtt min/avg/max/mdev = 0.009/0.011/0.014/0.002 ms

```


```
root@proxmox:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=115 time=16.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=115 time=16.6 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=115 time=16.6 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 16.620/16.634/16.655/0.015 ms
```
 
Also, this is the most relevant link I have found, but sadly it seems to be a dead end.
 
Yes, in the post above I sent screenshots of how the GUI looks.
These are the `ip a` results:

```
root@test:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0@if8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:24:11:bc:0d:78 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
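The `state DOWN` with `qdisc noop` on eth0 is the key clue: the interface was never brought up inside the container, so neither DHCP nor anything else can work on it. A quick manual test (a sketch; whether a DHCP client such as dhclient is present depends on the template):

```bash
# Bring the link up first; nothing can happen while it is administratively down
ip link set eth0 up

# If a DHCP client is installed, request a lease explicitly
dhclient eth0 2>/dev/null || echo "no dhclient in this template"
```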

I have found another similar post.
Based on it, I tried
`systemctl enable systemd-networkd.service`
`systemctl start systemd-networkd.service`
but that also gave me an error.

```
root@test:~# ping 8.8.8.8
ping: connect: Network is unreachable
root@test:~# systemctl enable systemd-networkd.service
root@test:~# systemctl start systemd-networkd.service
Job for systemd-networkd.service failed because the control process exited with error code.
See "systemctl status systemd-networkd.service" and "journalctl -xeu systemd-networkd.service" for details.
root@test:~# systemctl status systemd-networkd.service
x systemd-networkd.service - Network Configuration
Loaded: loaded (/usr/lib/systemd/system/systemd-networkd.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Fri 2025-10-31 08:22:36 UTC; 42s ago
TriggeredBy: x systemd-networkd.socket
Docs: man:systemd-networkd.service(8)
man:org.freedesktop.network1(5)
Process: 557 ExecStart=/usr/lib/systemd/systemd-networkd (code=exited, status=226/NAMESPACE)
Main PID: 557 (code=exited, status=226/NAMESPACE)
FD Store: 0 (limit: 512)
CPU: 3ms

Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Scheduled restart job, restart counter is at 5.
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Start request repeated too quickly.
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Failed with result 'exit-code'.
Oct 31 08:22:36 test systemd[1]: Failed to start systemd-networkd.service - Network Configuration.
```

```
root@test:~# journalctl -xeu systemd-networkd.service
--
-- Automatic restarting of the unit systemd-networkd.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Oct 31 08:22:36 test systemd[1]: Starting systemd-networkd.service - Network Configuration...
-- Subject: A start job for unit systemd-networkd.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit systemd-networkd.service has begun execution.
--
-- The job identifier is 1466.
Oct 31 08:22:36 test (networkd)[557]: systemd-networkd.service: Failed to set up mount namespacing: Permission denied
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Main process exited, code=exited, status=226/NAMESPACE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit systemd-networkd.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 226.
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit systemd-networkd.service has entered the 'failed' state with result 'exit-code'.
Oct 31 08:22:36 test systemd[1]: Failed to start systemd-networkd.service - Network Configuration.
-- Subject: A start job for unit systemd-networkd.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit systemd-networkd.service has finished with a failure.
--
-- The job identifier is 1466 and the job result is failed.
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Scheduled restart job, restart counter is at 5.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit systemd-networkd.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Start request repeated too quickly.
Oct 31 08:22:36 test systemd[1]: systemd-networkd.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit systemd-networkd.service has entered the 'failed' state with result 'exit-code'.
Oct 31 08:22:36 test systemd[1]: Failed to start systemd-networkd.service - Network Configuration.
-- Subject: A start job for unit systemd-networkd.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit systemd-networkd.service has finished with a failure.
--
-- The job identifier is 1483 and the job result is failed.
```
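Exit status 226/NAMESPACE means systemd could not set up the mount-namespace sandbox for the service, which is a common symptom in unprivileged LXC containers. One frequently suggested remedy is enabling the `nesting` feature on the CT so systemd can create those namespaces; a sketch, run on the Proxmox host (VMID 100 as used earlier in the thread):

```bash
# Allow the CT to create nested namespaces; several sandboxed systemd
# services (including systemd-networkd) need this in unprivileged CTs
pct set 100 --features nesting=1

# Restart the container so the feature takes effect
pct reboot 100
```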
 
Please show the output of the LXC/CT configuration:
```
pct config <vmid>
# replace <vmid> with the actual LXC/CT ID
```
 
Hello all,
the issue is fixed after multiple attempts with DeepSeek.


I don't know exactly how or what fixed it, but here is the full log:
```
root@test:~# netplan apply

** (generate:589): WARNING **: 10:35:27.917: Permissions for /etc/netplan/50-cloud-init.yaml are too open. Netplan configuration should NOT be accessible by others.

** (process:588): WARNING **: 10:35:28.108: Permissions for /etc/netplan/50-cloud-init.yaml are too open. Netplan configuration should NOT be accessible by others.

** (process:588): WARNING **: 10:35:28.233: Permissions for /etc/netplan/50-cloud-init.yaml are too open. Netplan configuration should NOT be accessible by others.
systemd-networkd is not running, output might be incomplete.
Failed to reload network settings: Unit dbus-org.freedesktop.network1.service not found.
Falling back to a hard restart of systemd-networkd.service
Failed to restart systemd-networkd.service: Unit systemd-networkd.service is masked.
Traceback (most recent call last):
File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 279, in command_apply
utils.networkctl_reload()
File "/usr/share/netplan/netplan_cli/cli/utils.py", line 131, in networkctl_reload
subprocess.check_call(['networkctl', 'reload'])
File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['networkctl', 'reload']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/sbin/netplan", line 23, in <module>
netplan.main()
File "/usr/share/netplan/netplan_cli/cli/core.py", line 58, in main
self.run_command()
File "/usr/share/netplan/netplan_cli/cli/utils.py", line 298, in run_command
self.func()
File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 63, in run
self.run_command()
File "/usr/share/netplan/netplan_cli/cli/utils.py", line 298, in run_command
self.func()
File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 284, in command_apply
utils.systemctl('restart', ['systemd-networkd.service'], sync=True)
File "/usr/share/netplan/netplan_cli/cli/utils.py", line 117, in systemctl
subprocess.check_call(command)
File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'restart', 'systemd-networkd.service']' returned non-zero exit status 1.
```
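The traceback itself shows netplan failing simply because systemd-networkd is masked. Anyone wanting to return to the netplan/networkd path instead of manual `ip` commands would have to remove the mask first (a sketch):

```bash
# Remove the mask so the unit can be managed again
systemctl unmask systemd-networkd.service systemd-networkd.socket

# Enable and start it, then re-apply the netplan configuration
systemctl enable --now systemd-networkd.service
netplan apply
```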

Output of `systemctl disable` (Step 3 of the AI's instructions):
```
root@test:~# systemctl disable systemd-networkd
Unit /etc/systemd/system/systemd-networkd.service is masked, ignoring.
The unit files have no installation config (WantedBy=, RequiredBy=, UpheldBy=,
Also=, or Alias= settings in the [Install] section, and DefaultInstance= for
template units). This means they are not meant to be enabled or disabled using systemctl.

Possible reasons for having these kinds of units are:
* A unit may be statically enabled by being symlinked from another unit's
.wants/, .requires/, or .upholds/ directory.
* A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
* A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
* In case of template units, the unit is meant to be enabled with some
instance name specified.
```
It was fixed after:
```
ip link set eth0 up
ip addr add 192.168.1.30/24 dev eth0
ip route add default via 192.168.1.1
```
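Worth noting: manual `ip` commands like these do not survive a container restart. Given that the container config later carries a static `ip=` in `net0` (see the `pct config 100` output further down), the persistent equivalent is to let Proxmox manage the address from the host (a sketch; values copied from that config):

```bash
# Persistently set the static address in the CT config; Proxmox applies
# it inside the container on every start
pct set 100 --net0 name=eth0,bridge=vmbr0,firewall=1,ip=192.168.1.30/24,gw=192.168.1.1
```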

`ip addr show` and `ip route show`:
```
root@test:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether bc:24:11:bc:0d:78 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.1.30/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::be24:11ff:febc:d78/64 scope link
valid_lft forever preferred_lft forever
root@test:~# ip route show
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.30
```
`cat /etc/resolv.conf`:
```
root@test:~# cat /etc/resolv.conf
nameserver 192.168.1.1
nameserver 8.8.8.8
```

```
root@test:~# systemctl status systemd-networkd
x systemd-networkd.service
Loaded: masked (Reason: Unit systemd-networkd.service is masked.)
Active: failed (Result: exit-code) since Fri 2025-10-31 10:14:52 UTC; 26min ago
TriggeredBy: x systemd-networkd.socket
Main PID: 136 (code=exited, status=226/NAMESPACE)
CPU: 4ms

Oct 31 10:14:52 test systemd[1]: systemd-networkd.service: Main process exited, code=exited, status=226/NAMESPACE
Oct 31 10:14:52 test systemd[1]: systemd-networkd.service: Failed with result 'exit-code'.
Oct 31 10:14:52 test systemd[1]: Failed to start systemd-networkd.service - Network Configuration.
Oct 31 10:14:52 test systemd[1]: systemd-networkd.service: Scheduled restart job, restart counter is at 5.
Oct 31 10:14:52 test systemd[1]: systemd-networkd.service: Start request repeated too quickly.
Oct 31 10:14:52 test systemd[1]: systemd-networkd.service: Failed with result 'exit-code'.
Oct 31 10:14:52 test systemd[1]: Failed to start systemd-networkd.service - Network Configuration.
```

```
root@test:~# systemctl status NetworkManager
Unit NetworkManager.service could not be found.
```

# AI Conversation (copy-paste)


## Solution: Configure the Network Manually

### Step 1: Check the current network interface

```bash
ip addr show
```

Look for your primary network interface (usually eth0).

### Step 2: Configure the network interface directly

Edit the network configuration:

```bash
nano /etc/netplan/50-cloud-init.yaml
```
Replace the content with:

```yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.1.30/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses:
          - 192.168.1.1
          - 8.8.8.8
```
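The "Permissions ... are too open" warnings seen in the log above come from a world-readable netplan file; tightening the mode silences them, in line with netplan's own recommendation:

```bash
# Netplan expects its YAML to be readable by root only
chmod 600 /etc/netplan/50-cloud-init.yaml
```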

### Step 3: Apply the network configuration

```bash
netplan apply
```

### Step 4: Test connectivity

```bash
# Test gateway
ping -c 3 192.168.1.1

# Test internet
ping -c 3 8.8.8.8

# Test DNS
nslookup google.com
```

## Alternative Solution: Use systemd-networkd with an LXC-friendly configuration

Since systemd-networkd is having permission issues, let's try a different approach:

### Step 1: Create a simple network configuration

```bash
cat > /etc/systemd/network/10-eth0.network << EOF
[Match]
Name=eth0

[Network]
Address=192.168.1.30/24
Gateway=192.168.1.1
DNS=192.168.1.1
DNS=8.8.8.8
EOF
```

### Step 2: Configure DNS separately

```bash
echo "nameserver 192.168.1.1" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
```
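In a Proxmox CT, `/etc/resolv.conf` may be rewritten at container start from the CT's `nameserver`/`searchdomain` settings, so the more durable place to set DNS is the container config itself (a sketch, run on the host; VMID 100 as before):

```bash
# Let Proxmox write the resolver configuration into the CT at every start
pct set 100 --nameserver "192.168.1.1 8.8.8.8" --searchdomain local
```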

### Step 3: Stop and mask the problematic services

```bash
systemctl stop systemd-networkd
systemctl disable systemd-networkd
systemctl mask systemd-networkd
```

### Step 4: Bring up the interface manually

```bash
# Bring up the interface
ip link set eth0 up
ip addr add 192.168.1.30/24 dev eth0
ip route add default via 192.168.1.1
```

### Step 5: Test immediately

```bash
ping -c 3 8.8.8.8
```
 
Hello, thank you for the answer.
After the DeepSeek answer and a lot of trial-and-error changes, this is the new and working config:
```
root@proxmox:~# pct config 100
arch: amd64
cores: 1
hostname: test
memory: 512
nameserver: 192.168.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=BC:24:11:BC:0D:78,ip=192.168.1.30/24,type=veth
ostype: ubuntu
rootfs: media-pool:subvol-100-disk-0,size=8G
searchdomain: local
swap: 512
```
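With a static `ip=`/`gw=` in `net0`, Proxmox configures the interface itself when the CT starts, so the guest no longer depends on systemd-networkd coming up. A quick end-to-end check from the host (a sketch):

```bash
# Restart the CT, then confirm DNS and the Ubuntu archive are reachable
pct reboot 100
pct exec 100 -- ping -c 3 archive.ubuntu.com
pct exec 100 -- apt update
```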

All of this was tried on Ubuntu Server 24.
I am going to test the other template options to see whether this is an issue unique to 24,
since, as I understand it, 22.04 has had more time to mature as an LTS release.
 
The issue only exists in the 24 version; a test with 22 worked fine without any setup changes needed. I will close this post now and open another one so as not to cause any issues.
Thank you so much for answering again!