Lost all network connection to/from LXC containers...

plato79

Member
Nov 24, 2020
Hi,

Yesterday I was fiddling with my dev environment and its LXC containers. I was checking whether it's possible to run two Docker instances, one on the host and one inside an LXC container, and join them into a swarm. I know that's a pretty questionable thing to do, but this is not a production machine, so I didn't mind too much.

Well, I'm not sure if that was the cause, but I suddenly lost the SSH connection to the LXC container. Then I realized that the Docker containers running inside that LXC container had also become unreachable.

I rebooted the machine and checked again and again, but couldn't find anything wrong with the containers. They don't even reply to ping requests.

The host is accessible, but I cannot reach the LXC containers except via lxc-console or lxc-attach. Even from inside a container, I cannot reach anything on the network.

No firewall is configured inside the containers.

What could be the problem?
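
Since Docker runs on the host, one thing I still want to rule out: as far as I know, the Docker daemon sets the host's iptables FORWARD policy to DROP when it starts, and if the br_netfilter module is loaded, bridged traffic through vmbr0 has to pass through that chain. These are the checks I'd run on the host first (just a sketch, I haven't confirmed this is the cause):

Code:
# Show the FORWARD chain policy; Docker typically switches it to DROP
iptables -S FORWARD | head -n 5

# Only present when br_netfilter is loaded; 1 means bridged frames
# are filtered by iptables (and thus by the DROP policy above)
sysctl net.bridge.bridge-nf-call-iptables

# Check whether br_netfilter is loaded at all
lsmod | grep br_netfilter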

Here is the pveversion -v output:

Code:
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-8
pve-kernel-helper: 7.2-8
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-6
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-7
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-11
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

And here is the configuration of the fileserver container (101, one of the containers I was talking about), which was created from the TurnKey fileserver template:

Code:
# pct config 101
arch: amd64
cores: 2
description: lxc.aa_profile%3A unconfined%0A
hostname: fileserver
memory: 1024
mp0: /media2,mp=/media
mp1: /media,mp=/Movies
mp2: /media/TV,mp=/TV
mp3: /jail/,mp=/mnt/jail
mp4: /mnt/pve/router,mp=/sda2
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=DA:72:91:4D:BD:E5,ip=192.168.1.32/24,type=veth
onboot: 1
ostype: debian
rootfs: ssd:subvol-101-disk-0,size=8G
swap: 1024
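
One detail I notice in that config: net0 has firewall=1, so the Proxmox firewall layer sits in front of this NIC even though I haven't configured any rules for it. To rule that out I'd check the firewall status and the per-NIC firewall devices on the host (a sketch):

Code:
# Overall PVE firewall state on this node
pve-firewall status

# With firewall=1, Proxmox inserts fwbr/fwln/fwpr devices between the
# container's veth and vmbr0; they should show up here
ip -br link | grep -E 'fwbr|fwln|fwpr'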

Inside the container:

Code:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:72:91:4d:bd:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.32/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
      
# ip r
default via 192.168.1.1 dev eth0 onlink
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.32

# systemctl status
● fileserver
    State: degraded
     Jobs: 0 queued
   Failed: 4 units
    Since: Wed 2022-08-03 21:10:59 UTC; 5min ago
   CGroup: /
           ├─.lxc
           │ ├─449 /bin/bash
           │ ├─494 systemctl status
           │ └─495 less -X -R -F
           ├─init.scope
           │ └─1 /sbin/init
           └─system.slice
             ├─fail2ban.service
             │ └─200 /usr/bin/python3 /usr/bin/fail2ban-server -xf start
             ├─noip2.service
             │ └─197 /usr/local/bin/noip2
             ├─cron.service
             │ └─129 /usr/sbin/cron -f
             ├─nmbd.service
             │ └─202 /usr/sbin/nmbd --foreground --no-process-group
             ├─systemd-journald.service
             │ └─60 /lib/systemd/systemd-journald
             ├─ssh.service
             │ └─209 /usr/sbin/sshd -D
             ├─jitterentropy.service
             │ └─85 /usr/sbin/jitterentropy-rngd
             ├─supervisor.service
             │ ├─199 /usr/bin/python2 /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
             │ └─430 /usr/bin/python3 /root/dev/blawarnut/nut.py -S
             ├─nfs-blkmap.service
             │ └─72 /usr/sbin/blkmapd
             ├─rsyslog.service
             │ └─122 /usr/sbin/rsyslogd -n -iNONE
             ├─console-getty.service
             │ └─218 /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
             ├─rpcbind.service
             │ └─100 /sbin/rpcbind -f -w
             ├─system-postfix.slice


/etc/network# cat interfaces
# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.1.32/24
        gateway 192.168.1.1

#auto eth1
#iface eth1 inet dhcp
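
Addressing and routing inside the container look correct to me, so from inside it I'd next check whether ARP resolves at all (a sketch):

Code:
# Try to reach the gateway from inside the container
ping -c 3 192.168.1.1

# ARP entries stuck in FAILED/INCOMPLETE would point at a layer-2 problem
ip neigh show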

On the host:

Code:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether ac:1f:6b:6d:9c:44 brd ff:ff:ff:ff:ff:ff
    altname enp5s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ac:1f:6b:6d:9c:45 brd ff:ff:ff:ff:ff:ff
    altname enp5s0f1
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:08:5b:a6:96:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.250/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
5: docker_gwbridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:64:bc:bd:44 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:b4:e1:e9:32 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
8: vethb0963c9@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 8a:31:97:18:e9:54 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth1768362@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 96:b3:49:fc:a1:61 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: vethc683e37@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether fa:19:2d:7c:a8:97 brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: veth42d698b@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ca:bb:f7:5b:95:37 brd ff:ff:ff:ff:ff:ff link-netnsid 5
17: veth0ddaec3@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 36:5e:89:da:4c:af brd ff:ff:ff:ff:ff:ff link-netnsid 4
19: veth15034e3@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 26:88:03:69:0c:4c brd ff:ff:ff:ff:ff:ff link-netnsid 8
23: vethb9c2017@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 26:f8:d1:dc:2c:bc brd ff:ff:ff:ff:ff:ff link-netnsid 7
25: veth66e1c7d@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether c6:cb:9b:90:ae:94 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    
# ip r
default via 192.168.1.1 dev vmbr0 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.19.0.0/16 dev docker_gwbridge proto kernel scope link src 172.19.0.1 linkdown
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.250
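
If it helps, I can also capture traffic on the host to see where the pings die, roughly like this (a sketch; 192.168.1.32 is the fileserver container from above):

Code:
# Watch ICMP to/from the container on the bridge; if requests appear here
# but replies never come back, the drop is on the host side
tcpdump -ni vmbr0 icmp and host 192.168.1.32

# List bridge ports; the container's veth should be attached to vmbr0
bridge link show | grep vmbr0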
 
