LXC: no network after restore

Morphushka

Well-Known Member
Jun 25, 2019
Syberia
Hello.
I want to move a CT from one server to another (no cluster).
The server where the CT currently lives (LVM):
pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-1-pve)
The server I want to move it to (ZFS):
pve-manager/6.2-4/9824574a (running kernel: 5.4.34-1-pve)

I made a backup of my container (stop mode), sent the archive to the other server, and restored it there:
pct restore 107 vzdump-lxc-106-2020_10_05-12_17_41.tar.lzo -storage mega

Where is "mega" my zfs pool. No errors, container runs, but network wont work.
Inside the container, "ip a" shows the link up:
Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if111: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:cf:f6:bf:22:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.200.6/24 brd 192.168.200.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::74cf:f6ff:febf:2200/64 scope link
       valid_lft forever preferred_lft forever

I removed the network device and added it again, but nothing changed.
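(I did that through the GUI; the pct equivalent would be roughly this, with the same net0 string as in my config below:)
Code:
pct set 107 --delete net0
pct set 107 --net0 name=eth0,bridge=vmbr0,firewall=1,gw=192.168.200.1,hwaddr=76:CF:F6:BF:22:00,ip=192.168.200.6/24,type=veth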
I also ran the container in the foreground with lxc-start:
lxc-start -n 107 -F -l DEBUG -o /tmp/lxc-107.log
The log is attached; nothing looks suspicious as far as I can see.

After 30-40 minutes the network starts to work (what?!). If I reboot the container, again there is no network.
Containers created directly on this host do not have this problem.
What could be wrong? How can I fix it?
 

Attachments

  • lxc-start.txt
    18 KB
hi,

could it be that this IP is already used by another VM/CT?
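for example, you could run a duplicate address detection from the host (assuming iputils-arping is installed there):

Code:
# duplicate address detection: no reply means 192.168.200.6 is free on the bridge
arping -D -I vmbr0 -c 3 192.168.200.6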

or the network configuration needs to be adapted for the new node?

can you post the container config: pct config 107, and cat /etc/network/interfaces from the new node?
 
Hi!
could it be that this IP is already used by another VM/CT?
No, on the old server those containers are stopped.
or the network configuration needs to be adapted for the new node?
I will check in this direction.

pct config 107
Code:
arch: amd64
cores: 1
hostname: backup
memory: 1024
nameserver: 91.219.*.*
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.200.1,hwaddr=76:CF:F6:BF:22:00,ip=192.168.200.6/24,type=veth
ostype: debian
rootfs: mega:subvol-107-disk-0,size=1G
searchdomain: 91.219.*.*
swap: 0

192.168.200.0/24 is a special subnet for VPS containers. The Cisco has routes for this subnet and redirects requests to the VPS containers.
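For context, inside the CT the routing is just a default gateway in that subnet; something like:
Code:
# inside the container: default route should point at the Cisco-side gateway
ip route
# expected form: default via 192.168.200.1 dev eth0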

From the host (IP parts hidden): cat /etc/network/interfaces
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 91.*.*.*/26
        gateway 91.*.*.*
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
 
or the network configuration needs to be adapted for the new node?
When the network is fine, changing the IP takes only 1-2 seconds to apply. I tried both previously used IP addresses and new ones. After a reboot it takes a long time.
And again, this happens only with restored containers.
 
what do you see in the container journal? could you check journalctl -r and see if there are any entries related to the network config?
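for example, something like this from the host (pct enter drops you into the container):

Code:
pct enter 107
# newest entries first, current boot only
journalctl -r -b --no-pager | head -50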
 
what do you see in the container journal? could you check journalctl -r and see if there are any entries related to the network config?
I restarted the CT and then looked into journalctl -r.
I didn't find anything wrong or surprising (file attached).

More interesting is dmesg: there are a lot of messages about fwbr107i0 (my container's firewall bridge, as I understand it) changing state (file attached):
Code:
[1023272.140121] fwbr107i0: port 2(veth107i0) entered blocking state
[1023272.140125] fwbr107i0: port 2(veth107i0) entered disabled state

and the network maybe starts to work after this message:
Code:
fwbr107i0: port 2(veth107i0) entered forwarding state
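If it helps, I can also dump the STP settings of that firewall bridge while the container is running; something like:
Code:
# STP and forward-delay settings of the auto-created firewall bridge
brctl showstp fwbr107i0
cat /sys/class/net/fwbr107i0/bridge/stp_state
cat /sys/class/net/fwbr107i0/bridge/forward_delay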

Can you please look into this?
 

Attachments

  • journalctl.txt
    6.8 KB
  • dmesg.txt.txt
    4.3 KB
