[SOLVED] OpenVZ containers "losing" network connectivity until I ping the default gateway

wawawawa

Member
Mar 8, 2014
EDIT: I changed the networking setup on the OpenVZ containers from venet to veth and the problem is resolved.
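For anyone hitting the same issue, this is roughly what I did. It is only a sketch; 101, vmbr0 and 192.168.0.50 are example values for the VEID, bridge and container address, so adjust them for your own setup:

Code:
# on the Proxmox host: remove the venet IP and add a veth interface bridged to vmbr0
vzctl set 101 --ipdel all --save
vzctl set 101 --netif_add eth0,,,,vmbr0 --save
vzctl restart 101

Then inside the container, give eth0 a static address (with veth the container does its own ARP on the bridge, unlike venet):

Code:
# /etc/network/interfaces inside the container
auto eth0
iface eth0 inet static
    address 192.168.0.50
    netmask 255.255.255.0
    gateway 192.168.0.1

Since switching, the drop-outs have not come back.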


=====================================================

Hi All

Proxmox host = Intel NUC with i5, 16GB RAM, SSD storage.

I have a few Debian 7 containers running a bunch of services. After a few minutes the containers lose connectivity and I can no longer reach them on the network. When I enter a container (vzctl enter <VEID>) I am unable to ping anything. If I ping my default gateway, I get a response after one or two seconds and then connectivity is restored. It looks to me like an ARP cache entry is timing out and the ping forces a re-ARP for the MAC of the default gateway.

Example:

Code:
root@samosa:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
14 packets transmitted, 0 received, 100% packet loss, time 12999ms

root@samosa:~# ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_req=1 ttl=63 time=772 ms
64 bytes from 192.168.0.1: icmp_req=2 ttl=63 time=0.727 ms
64 bytes from 192.168.0.1: icmp_req=3 ttl=63 time=0.986 ms
64 bytes from 192.168.0.1: icmp_req=4 ttl=63 time=0.745 ms
^C
--- 192.168.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.727/193.845/772.923/334.330 ms

root@samosa:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=47 time=12.3 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=47 time=12.7 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=47 time=11.2 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=47 time=12.2 ms
64 bytes from 8.8.8.8: icmp_req=5 ttl=47 time=13.1 ms
^C


Any ideas on how I can fix this, or how to troubleshoot it further?
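One thing I was planning to try, to confirm the ARP theory: watch the neighbour table and ARP traffic on the Proxmox host while a container is unreachable (vmbr0 and 192.168.0.1 are from my config below), roughly:

Code:
# on the Proxmox host, while a container cannot ping out:
ip neigh show dev vmbr0              # does the gateway entry show STALE or FAILED?
tcpdump -ni vmbr0 arp or icmp        # do ARP requests go unanswered until the manual ping?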

My /etc/network/interfaces config is below. Note that I am only using eth0 / vmbr0 at the moment.

Code:
root@naan:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


auto vmbr0
iface vmbr0 inet static
    address  192.168.0.3
    netmask  255.255.255.0
    gateway  192.168.0.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0


auto vmbr1
iface vmbr1 inet static
    address  192.168.255.1
    netmask  255.255.255.0
    bridge_ports eth1

Many thanks in advance for any suggestions.
 