Network unavailable after LXC container restore

ramyalexis

New Member
Jun 13, 2023
Network connectivity is lost after restoring a container from a backup.
Container OS: RockyLinux 9
Steps:
Restore the container from a backup stored on NFS.
After start, there is no IP address on the interface.
The ifcfg-eth file with the needed configuration is in place.
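For reference, this is roughly how I check the state after start (a sketch; I am assuming the interface inside the container is eth0):

Bash:
# Show the interface state and any addresses it carries
ip -br addr show eth0
# Show what NetworkManager thinks about devices and connections
nmcli device status
nmcli connection show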

The probable cause I can see is that the UUID in the file is different from the UUID reported by `nmcli connection`.
So I assume the following happens: after the container restore, the interface UUID somehow changes, and the current ifcfg file no longer reflects this.
If I delete and recreate the interface from the Proxmox GUI, it creates the interface with the same (old) UUID, and hence there is still no network.
I tried changing the UUID to the value reported by nmcli and restarting NetworkManager, but no luck: the interface is still unconfigured.
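For reference, this is how I compare the two UUIDs (a sketch; the ifcfg path and the interface name eth0 are assumptions):

Bash:
# UUID stored in the ifcfg file
grep ^UUID /etc/sysconfig/network-scripts/ifcfg-eth0
# UUID NetworkManager actually reports for the connection
nmcli -f NAME,UUID,DEVICE connection show
# After editing the file, make NetworkManager re-read it
nmcli connection reload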

The only workaround I found is to delete the interface from the Proxmox GUI, then delete the ifcfg file, and then recreate the interface.
This way the container gets its IP and so on. The ifcfg file now holds a new UUID for the interface, but it is still different from the UUID reported by `nmcli connection`, so it only works until the next restart.
After a restart, the container loses network again because the interface becomes unconfigured again.
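Roughly the CLI equivalent of those GUI steps, for reference (a sketch; the CT ID and the net0 values are placeholders, not my exact configuration):

Bash:
# On the PVE host: remove the network device from the CT config
pct set <CTID> --delete net0
# Inside the CT: remove the stale ifcfg file
pct exec <CTID> -- rm /etc/sysconfig/network-scripts/ifcfg-eth0
# Recreate the interface with the desired settings
pct set <CTID> --net0 name=eth0,bridge=vmbr0,ip=10.177.11.251/24,gw=10.177.11.1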

I hope I described this clearly.

How can this be solved?
 
Hi,

Does only one container have this issue, or all containers?

What PVE version are you using (`pveversion -v`)?

How does the network configuration of your server look (`cat /etc/network/interfaces`)?
 
pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-13 (running version: 7.4-13/46c37d9c)
pve-kernel-5.15: 7.4-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.2
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp9s0f0 inet manual

iface enp9s0f1 inet manual

auto eno1
iface eno1 inet manual

auto ens3f1
iface ens3f1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves eno1 ens3f1
bond-miimon 100
bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
address 10.177.11.102/24
gateway 10.177.11.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

All containers. I assume something is happening with permissions after the restore.
In the containers I get a lot of errors like:

Jun 14 11:28:06 netbox sshd[6713]: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Jun 14 11:28:06 netbox sshd[6713]: @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
Jun 14 11:28:06 netbox sshd[6713]: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Jun 14 11:28:06 netbox sshd[6713]: Permissions 0777 for '/etc/ssh/ssh_host_rsa_key' are too open.
Jun 14 11:28:06 netbox sshd[6713]: It is required that your private key files are NOT accessible by others.
Jun 14 11:28:06 netbox sshd[6713]: This private key will be ignored.

And like:
Wrong permissions on /var/lib/pgsql/data/. Without changing them, PostgreSQL refuses to start.

All these errors come up after the restore.
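For reference, a quick way to repair the two reported cases from the PVE host (a sketch; I am assuming the netbox CT is ID 104, per the config below, and a default PostgreSQL layout):

Bash:
# Fix the SSH host key permissions inside the CT
pct exec 104 -- chmod 600 /etc/ssh/ssh_host_rsa_key
# Restore ownership and mode on the PostgreSQL data directory
pct exec 104 -- chown -R postgres:postgres /var/lib/pgsql/data
pct exec 104 -- chmod 700 /var/lib/pgsql/data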
 
pct config 104
arch: amd64
cores: 4
features: nesting=1
hostname: netbox
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.177.11.1,hwaddr=A2:B4:DD:5F:55:42,ip=10.177.11.251/24,ip6=auto,type=veth
onboot: 0
ostype: centos
rootfs: data:vm-104-disk-0,size=20G
swap: 2048
unprivileged: 1
 
Thank you for the outputs!

In the containers I get a lot of errors like:
Are these errors from the syslog in the CT itself? If yes, may I ask what the CT is running?

Can you do a test restore with debug when you start the CT? To obtain the debug log for the CT, run the following command (you have to replace the CTID):

Bash:
lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
 
Are these errors from the syslog in the CT itself? - Correct.
May I ask what the CT is running? - You mean the OS? RockyLinux 9.

Can you do a test restore with debug when you start the CT?
As every container has this problem, my test plan is:
1. Create a container with a static IP. Done: it works, ping is in place, and a restart breaks nothing. Everything is OK.
2. Make a backup of the container and restore it. Sure enough, the container loses network connectivity. Again, the ifcfg file is in place, but the interface looks unconfigured.
3. Stop the container and run it from the console with the command provided.

Got these errors:

lxc-start -n 106 -F -l DEBUG -o /tmp/lxc-CTID.log
lxc-start: 106: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 32
lxc-start: 106: ../src/lxc/start.c: lxc_init: 844 Failed to run lxc.hook.pre-start for container "106"
lxc-start: 106: ../src/lxc/start.c: __lxc_start: 2027 Failed to initialize container "106"
lxc-start: 106: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 1
lxc-start: 106: ../src/lxc/start.c: lxc_end: 985 Failed to run lxc.hook.post-stop for container "106"
lxc-start: 106: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 106: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options
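For what it is worth, status 32 in "Script exited with status 32" matches the mount(8) failure exit code, so I assume the pre-start hook failed to mount something; the details should be in the log file the command wrote:

Bash:
# Show only the error/warning lines from the debug log
grep -Ei 'error|warn' /tmp/lxc-CTID.log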

At the same time, starting it from the GUI poses no problem.

PS: the backup is on a Synology NAS NFS share.
 
Hmm, thank you for the outputs and the info!

Make a backup of the container and restore it. Sure enough, the container loses network connectivity. Again, the ifcfg file is in place, but the interface looks unconfigured.
Interesting... To narrow down the issue, I have two questions:
1. Does restoring from another backup storage, e.g. a directory storage, cause the same issue?
2. Can you check the share permissions? Maybe the issue is related to the `root squash` setting on the NFS share?
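For reference, on a plain Linux NFS server this corresponds to the no_root_squash export option (a sketch; the path and subnet are made up, and Synology exposes the same squash options through its GUI):

Bash:
# Example line in /etc/exports on a generic NFS server:
#   /volume1/pve-backup 10.177.11.0/24(rw,sync,no_subtree_check,no_root_squash)
# Re-export and verify the active options:
exportfs -ra
exportfs -v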
 
1. I cannot test this, as I have very limited local storage space, so it will take time to set something like that up.
2. The NFS share does not work for me with the root squash option, so I use a configuration I found somewhere on the internet:
[attached screenshots of the Synology NFS share permission settings]
Without this combination (mapping all users to root and granting root all permissions) the backup does not work, failing with the well-known rsync permissions error.
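A quick way to check from the PVE host whether root squashing is in effect (a sketch; the mount path under /mnt/pve/ depends on the storage name):

Bash:
# Create a file as root on the NFS-backed storage and check its owner
touch /mnt/pve/<storage>/squash-test
ls -ln /mnt/pve/<storage>/squash-test   # uid 0 means root is not squashed
rm /mnt/pve/<storage>/squash-test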
 
Oh, I forgot to ask you to attach the /tmp/lxc-CTID.log file to this thread.
 