I have rather weird issues with my OpenVZ containers using venet and running in Proxmox 1.9 (Kernel 2.6.32-6-pve, fully updated as of 2012-02-08) hosted at OVH.
The Setup:
- a private 172.16.0.0/12 and a public x.x.x.x/28 subnet are assigned to the VLAN-Interface
- the Containers are each provided with ONLY a public IP out of the /28 subnet to their venet-Interface
Problem:
At first, the connectivity of the containers seems just fine for about four hours (the timeframe is reproducible).
After that, they are no longer reachable from other machines in the public x.x.x.x/28 subnet or from the internet, and they cannot ping other machines with public IPs either.
The strange part is that the containers can ping machines in the private 172.16.0.0/12 subnet the whole time - but cannot be pinged from those machines.
The containers' public IPs also respond when accessed directly from the HN.
These issues disappear (for the next four hours) when the containers are rebooted.
/etc/network/interfaces:
Code:
# The loopback network interface
auto lo
iface lo inet loopback

# for Routing
auto vmbr1
iface vmbr1 inet manual
    post-up /etc/pve/kvm-networking.sh
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0

# vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.xxx
    network xxx.xxx.xxx.xxx
    broadcast xxx.xxx.xxx.xxx
    gateway xxx.xxx.xxx.xxx
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto eth0.XXXX
iface eth0.XXXX inet static
    address 172.16.0.2
    netmask 255.240.0.0
    post-up ip r a 172.16.0.0/12 via 172.31.255.254 dev eth0.XXXX; true

auto eth0.XXXX:0
iface eth0.XXXX:0 inet static
    address xxx.xxx.xxx.2
    network xxx.xxx.xxx.0
    broadcast xxx.xxx.xxx.15
    gateway xxx.xxx.xxx.14
    netmask 255.255.255.240
    post-up /sbin/ip route add default via xxx.xxx.xxx.14 dev eth0.XXXX table 125
    post-up /sbin/ip rule add from xxx.xxx.xxx.0/28 table 125
    post-down /sbin/ip route del default via xxx.xxx.xxx.14 dev eth0.XXXX table 125
    post-down /sbin/ip rule del from xxx.xxx.xxx.0/28 table 125
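For reference, the source-based routing that the post-up lines above are supposed to install can be verified at runtime like this (XXXX and the xxx.xxx.xxx addresses are placeholders, as in the config):

Code:
# Policy-routing rule for the public /28; should list:
#   from xxx.xxx.xxx.0/28 lookup 125
ip rule show

# Default route installed in table 125; should list:
#   default via xxx.xxx.xxx.14 dev eth0.XXXX
ip route show table 125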
/etc/sysctl.conf:
Code:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.proxy_arp = 1
net.ipv4.conf.eth0/XXXX.proxy_arp = 1
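(The slash in the last key is intentional: since the VLAN interface name eth0.XXXX itself contains a dot, sysctl accepts the dot written as a slash, which maps to /proc/sys/net/ipv4/conf/eth0.XXXX/proxy_arp.) To confirm the settings actually took effect at runtime, I check the proc interface directly (eth0.XXXX again a placeholder):

Code:
# 1 = proxy ARP enabled on the interface:
cat /proc/sys/net/ipv4/conf/eth0.XXXX/proxy_arp
cat /proc/sys/net/ipv4/conf/venet0/proxy_arp

# Or list the proxy-ARP state of all interfaces at once:
grep . /proc/sys/net/ipv4/conf/*/proxy_arp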
route -n:
Code:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
xxx.xxx.xxx.7 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
xxx.xxx.xxx.9 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
xxx.xxx.xxx.8 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
xxx.xxx.xxx.11 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
xxx.xxx.xxx.10 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
xxx.xxx.xxx.0 0.0.0.0 255.255.255.240 U 0 0 0 eth0.XXXX
xxx.xxx.xxx.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 eth0.XXXX
0.0.0.0 xxx.xxx.xxx.14 0.0.0.0 UG 0 0 0 eth0.XXXX
0.0.0.0 xxx.xxx.xxx.254 0.0.0.0 UG 0 0 0 vmbr0
I hope someone has an idea how I could solve these issues...
Thanks in advance!
EDIT:
tcpdump -i eth0.XXXX shows that ARP requests for an affected IP are received by the HN but are not answered.
If I ping from inside a container, "tcpdump -i eth0.XXXX" on the HN shows that the echo requests are transmitted, but again: ARP requests that are not answered, and no echo replies.
If I ping a machine in 172.16.0.0/12, the echo replies do reach the container:
Code:
listening on eth0.XXXX, link-type EN10MB (Ethernet), capture size 96 bytes
03:57:41.596299 IP PUBLIC-IP-OF-CONTAINER > 172.16.0.4: ICMP echo request, id 56072, seq 1, length 64
03:57:41.599311 arp who-has PUBLIC-IP-OF-CONTAINER tell 172.16.0.4
03:57:41.904644 IP 172.16.0.4 > PUBLIC-IP-OF-CONTAINER: ICMP echo reply, id 56072, seq 1, length 64
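Since the HN stops answering ARP for the container IPs after roughly four hours, the next thing I plan to compare is the HN's neighbor and route state before and after the problem starts (a sketch; substitute the real interface name for eth0.XXXX):

Code:
# The per-container /32 routes that vzctl installs on the HN:
ip route show dev venet0

# Neighbor/ARP state on the VLAN interface, and any proxy-ARP entries:
ip neigh show dev eth0.XXXX
ip neigh show proxy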