Intermittent LXC connectivity

drogo
Dec 18, 2017
I'm experiencing an odd issue with one of my LXC containers. It has intermittent network connectivity to anything beyond the Proxmox host.

I recently re-installed my entire system, going from an old 4.2 installation (where everything worked) to 5.1. I backed up ~15 LXC containers and 3 QEMU VMs to an NFS share, then restored those backups to the newly built system. Everything works fine except for this one LXC container. The container runs Ubiquiti UniFi services, so it's on an older template, Ubuntu 14.04.

It's configured with a static IP and gateway; on the host it's bridged on vmbr0. When the issue occurs, the container can still ping the host, but nothing beyond it, including the gateway router. I tried removing the NIC from the container and re-adding it, but no change; same issue.

Any ideas?
 
What does your network/container config look like?
 
It's pretty basic. The Proxmox host (prox01) is 10.0.0.40/24.

Code:
root@prox01:/etc/pve/lxc# cat 103.conf
arch: amd64
cpulimit: 2
cpuunits: 1024
hostname: unifi01
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=10.0.0.1,hwaddr=DA:F6:61:E0:28:65,ip=10.0.0.110/24,type=veth
ostype: ubuntu
rootfs: dpool:subvol-103-disk-1,size=64G
swap: 512
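
For completeness, the host side of the veth pair can be checked while the container is running. A quick sketch, assuming standard Proxmox naming (CT 103's interface usually shows up on the host as veth103i0) and that bridge-utils/iproute2 are installed on the host:
Code:
# on the Proxmox host
brctl show vmbr0            # ports attached to vmbr0; expect veth103i0 here
bridge link show            # iproute2 view of bridge port state
ip -d link show veth103i0   # details for the container's host-side veth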
 
Mhm, what does
Code:
ip addr
ip route
ip link
say inside the container? (It would be good if you could save the output both when it works and when it does not.)
 
Huh. Actually, it loses all connectivity. When it was able to ping the host, it must've been in the middle of cutting over. I'm still trying to catch it while it works.


Code:
root@unifi01:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
38: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:f6:61:e0:28:65 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::d8f6:61ff:fee0:2865/64 scope link
valid_lft forever preferred_lft forever
root@unifi01:~# ip route
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.110
root@unifi01:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
38: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether da:f6:61:e0:28:65 brd ff:ff:ff:ff:ff:ff
root@unifi01:~# ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
^C
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3070ms

root@unifi01:~# ping -c1 10.0.0.40
PING 10.0.0.40 (10.0.0.40) 56(84) bytes of data.
^C
--- 10.0.0.40 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
 
Is any firewall (host/guest) active?
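
(For reference, a quick way to check both sides; a sketch assuming stock Proxmox tooling on the host and iptables in the guest:)
Code:
# on the host
pve-firewall status    # is the PVE firewall enabled/running?
iptables -S            # any unexpected rules on the host?
# inside the container
iptables -S            # should be only default ACCEPT policies if no guest firewall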
 
Maybe the IP and/or MAC address is in use by another machine on the network?
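
(One way to test that hypothesis from inside the container; a sketch assuming the iputils version of arping is available:)
Code:
# Duplicate-address detection: probe for our own IP with a zero sender
# address; any ARP reply means another machine also claims 10.0.0.110.
arping -D -I eth0 -c 3 10.0.0.110 && echo "no conflict seen" || echo "conflict!"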
 
Nope. It's a static IP, and nothing else responds when I try to reach it.
 
I'm able to force it to work for a few seconds at a time by flushing the ARP table. So here it is not working, then a flush, then it works for a short bit (10-15 seconds later it stops working again).

Code:
root@unifi01:/etc/resolvconf# ping -c1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

root@unifi01:/etc/resolvconf# ip route
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.110
root@unifi01:/etc/resolvconf# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
44: eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:f6:61:e0:28:65 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
root@unifi01:/etc/resolvconf# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
44: eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether da:f6:61:e0:28:65 brd ff:ff:ff:ff:ff:ff
root@unifi01:/etc/resolvconf# ip n flush dev eth0
root@unifi01:/etc/resolvconf# ping -c1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
root@unifi01:/etc/resolvconf# ip route
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.110
root@unifi01:/etc/resolvconf# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
44: eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:f6:61:e0:28:65 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
root@unifi01:/etc/resolvconf# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
44: eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether da:f6:61:e0:28:65 brd ff:ff:ff:ff:ff:ff
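
To catch the exact moment it flips without flushing by hand, something like this could log the gateway's neighbour entry next to a ping result once a second (a sketch, assuming bash and iproute2 inside the container):
Code:
#!/bin/bash
# Watch the gateway: ARP state plus reachability, one line per second.
GW=10.0.0.1
while true; do
    entry=$(ip neigh show "$GW")    # e.g. "... lladdr 38:60:77:8b:26:8b REACHABLE"
    if ping -c1 -W1 "$GW" >/dev/null 2>&1; then
        ok=OK
    else
        ok=FAIL
    fi
    echo "$(date '+%T') $ok  $entry"
    sleep 1
done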
 
I'm able to force it to work for a few seconds at a time by flushing the ARP table. So here it is not working, then a flush, then it works for a short bit (10-15 seconds later it stops working again).
Can you do an 'arp -v' when it works and when it does not work?
 
Code:
root@unifi01:~# ping -c2 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

--- 10.0.0.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1020ms

root@unifi01:~# arp -vn
Address          HWtype  HWaddress          Flags Mask  Iface
10.0.0.132       ether   84:d6:d0:ed:b4:2e  C           eth0
10.0.0.141       ether   24:a4:3c:e8:b3:ab  C           eth0
10.0.0.106       ether   00:0e:0c:62:eb:d2  C           eth0
10.0.0.130       ether   f0:27:2d:56:53:b2  C           eth0
10.0.0.13        ether   32:63:34:30:39:37  C           eth0
10.0.0.143       ether   0c:47:c9:0e:b5:c7  C           eth0
10.0.0.1         ether   38:60:77:8b:26:8b  C           eth0
Entries: 7 Skipped: 0 Found: 7
root@unifi01:~# ip n flush dev eth0
root@unifi01:~# ping -c2 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.155 ms

--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1023ms
rtt min/avg/max/mdev = 0.155/0.215/0.275/0.060 ms
root@unifi01:~# arp -vn
Address          HWtype  HWaddress          Flags Mask  Iface
10.0.0.141       ether   24:a4:3c:e8:b3:ab  C           eth0
10.0.0.106       ether   00:0e:0c:62:eb:d2  C           eth0
10.0.0.13        ether   32:63:34:30:39:37  C           eth0
10.0.0.1         ether   38:60:77:8b:26:8b  C           eth0
Entries: 4 Skipped: 0 Found: 4
root@unifi01:~#


By the way, thank you very much for your help on this. I'm tempted to just wipe and re-install the services on a new container with the same IP, but I'd like to figure out what's going on.
 
Resolved the issue.

Short answer: another device on the network had been configured with this container's IP, hence the intermittent connectivity. I found this after creating a new container and still hitting the problem. I then tried a CentOS container, which complained about an IP conflict on startup. That gave me a MAC address to hunt down, and I eventually found the offending device.
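
(For anyone who hits this later: if the arp-scan package is available on any machine in the subnet, duplicate claims show up directly; a sketch:)
Code:
# Every host answering ARP on the segment, with its MAC.
# A second answer for the same IP is flagged like "(DUP: 2)".
arp-scan --interface=eth0 10.0.0.0/24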

Thanks again for your help!! Much appreciated!
 
