can't ping containers

netbone

I am really going crazy now.

I first had a 2.x cluster with 3 nodes.

1 node crashed and it was not possible to remove the crashed node from the cluster.

I reinstalled the node from scratch, and at first it was possible to add it to the cluster again.

Then the IP addresses and hostnames changed, and as a result the cluster crashed again.

Now I have installed everything without clustering, as standalone nodes.

I have added two OpenVZ containers.

The containers have 10 IP addresses.

From the Proxmox host it is possible to reach the containers:

root@pve1 ~ $ ping 1.2.100.50
PING 1.2.100.50 (1.2.100.50) 56(84) bytes of data.
64 bytes from 1.2.100.50: icmp_req=1 ttl=64 time=0.038 ms
64 bytes from 1.2.100.50: icmp_req=2 ttl=64 time=0.033 ms
64 bytes from 1.2.100.50: icmp_req=3 ttl=64 time=0.034 ms
^C
--- 1.2.100.50 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.033/0.035/0.038/0.002 ms


From outside it is not possible to reach the containers' IP addresses ... but not all of them:

I tried a couple of IP addresses, all from the same /24 network.

Some are reachable, some are not.

I created another OpenVZ container with the same problem:

One IP can be reached, the other can't.

Every time I reboot the host and the containers, different IP addresses become reachable and others stop responding.

I do not understand what is happening here.

Any answers would be helpful.
 
The ARP request is received by the host:

listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
07:32:55.714145 ARP, Request who-has 1.2.100.50 tell 1.2.100.1, length 46
07:32:56.315082 ARP, Request who-has 1.2.100.50 tell 1.2.100.1, length 46

But the host does not answer.
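
With venet there is no MAC address inside the container, so the host itself has to answer ARP for the container IPs on its public interface (vzctl normally publishes a proxy-ARP entry per IP). A minimal check I would try, assuming eth0 is the public interface (not yet confirmed to be the cause):

# proxy-ARP entries the host currently publishes
ip neigh show proxy
# per-interface proxy_arp flag (0 is fine as long as explicit proxy entries exist)
cat /proc/sys/net/ipv4/conf/eth0/proxy_arp
# test only: publish one of the unreachable IPs by hand, then retry the ping from outside
ip neigh add proxy 1.2.100.50 dev eth0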

And the host itself can ping the container:

root@pve1 ~ $ ping 1.2.100.50
PING 1.2.100.50 (1.2.100.50) 56(84) bytes of data.
64 bytes from 1.2.100.50: icmp_req=1 ttl=64 time=0.060 ms
^C
--- 1.2.100.50 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms

venet0 is up

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:1416 errors:0 dropped:0 overruns:0 frame:0
TX packets:1298 errors:0 dropped:7 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:105362 (102.8 KiB) TX bytes:104968 (102.5 KiB)


The container can't ping anything outside:

Entered CT 123:
root@node050:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

root@node050:/# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default * 0.0.0.0 U 0 0 0 venet0

The IPs inside the container are correct:

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:112 errors:0 dropped:0 overruns:0 frame:0
TX packets:235 errors:0 dropped:6 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9408 (9.1 KiB) TX bytes:17666 (17.2 KiB)

venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:1.2.100.233 P-t-P:1.2.100.233 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

venet0:1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:1.2.100.50 P-t-P:1.2.100.50 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

The IP address configured as venet0:0 can be reached from outside, but it seems to be answered by the host instead of the container: when I ping that IP, a tcpdump inside the container shows no traffic at all (it only prints "listening on venet0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes").
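
To see which machine really answers, roughly what I would compare (sketch only, assuming another machine in the same /24 is available and its uplink is also called eth0):

# from another machine in 1.2.100.0/24: which MAC replies for the container IP?
arping -I eth0 -c 3 1.2.100.233
# on the Proxmox host: does the ICMP actually cross venet0 towards the container?
tcpdump -n -i venet0 icmp and host 1.2.100.233

With venet it is expected that pve1's eth0 MAC answers the ARP; but if nothing crosses venet0 at the same time, the traffic stops at the host instead of being forwarded into the container.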

On host:

root@pve1 ~ $ cat /etc/sysctl.conf
#
# /etc/sysctl.conf - Configuration file for setting system variables
# See /etc/sysctl.d/ for additonal system variables
# See sysctl.conf (5) for information.
#

#kernel.domainname = example.com

# Uncomment the following to stop low-level messages on console
#kernel.printk = 3 4 1 3

##############################################################3
# Functions previously found in netbase
#

# Uncomment the next two lines to enable Spoof protection (reverse-path filter)
# Turn on Source Address Verification in all interfaces to
# prevent some spoofing attacks
#net.ipv4.conf.default.rp_filter=1
#net.ipv4.conf.all.rp_filter=1

# Uncomment the next line to enable TCP/IP SYN cookies
# See http://lwn.net/Articles/277146/
# Note: This may impact IPv6 TCP sessions too
#net.ipv4.tcp_syncookies=1

# Uncomment the next line to enable packet forwarding for IPv4
#net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
# Enabling this option disables Stateless Address Autoconfiguration
# based on Router Advertisements for this host
#net.ipv6.conf.all.forwarding=1


###################################################################
# Additional settings - these settings can improve the network
# security of the host and prevent against some network attacks
# including spoofing attacks and man in the middle attacks through
# redirection. Some network environments, however, require that these
# settings are disabled so review and enable them as needed.
#
# Do not accept ICMP redirects (prevent MITM attacks)
#net.ipv4.conf.all.accept_redirects = 0
#net.ipv6.conf.all.accept_redirects = 0
# _or_
# Accept ICMP redirects only for gateways listed in our default
# gateway list (enabled by default)
# net.ipv4.conf.all.secure_redirects = 1
#
# Do not send ICMP redirects (we are not a router)
#net.ipv4.conf.all.send_redirects = 0
#
# Do not accept IP source route packets (we are not a router)
#net.ipv4.conf.all.accept_source_route = 0
#net.ipv6.conf.all.accept_source_route = 0
#
# Log Martian Packets
#net.ipv4.conf.all.log_martians = 1
#
net.ipv6.conf.all.proxy_ndp=1
net.ipv4.ip_forward=1
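
The file only shows what is applied at boot; a quick runtime check (just a sketch, with eth0 assumed to be the public NIC):

# re-apply the file and read back the values that matter for venet forwarding
sysctl -p /etc/sysctl.conf
sysctl net.ipv4.ip_forward
sysctl net.ipv4.conf.eth0.proxy_arp
sysctl net.ipv4.conf.all.rp_filter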

root@pve1 ~ $ pvecm status
cman_tool: Cannot open connection to cman, is it running ?
root@pve1 ~ $

Additionally, the routing looks absolutely correct:

root@pve1 ~ $ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
1.2.100.231 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.50 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.51 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.52 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.53 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.54 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.55 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.56 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.57 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.58 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.11 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.59 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.232 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
1.2.100.233 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
6.7.171.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
0.0.0.0 6.7.171.1 0.0.0.0 UG 0 0 0 eth0
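
For one of the unreachable addresses I would also let the kernel say which path it would actually pick (pure sanity check); both should come back with "dev venet0" if the table above is really in effect:

ip route get 1.2.100.11
ip route get 1.2.100.233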

Inside container:

root@node050:/# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 venet0

Completely confused. Help, please.
 
No, I have not tried that. With Proxmox 1.x this was not necessary.

Additionally: why is it always possible to reach one of the IP addresses I gave the container, but not all of them (even though they are from the same network)?

The iptables rule has already been removed - that was only a test, and without the iptables rule I am in the same situation as described before.


root@pve1 ~ $ iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
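
iptables --list only covers the filter table; just in case something is left over from the earlier test, I would also look at nat and mangle (sketch only):

iptables -t nat -L -n -v
iptables -t mangle -L -n -v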
 
Everything is correct inside the container, too:

auto venet0:0
iface venet0:0 inet static
address 1.2.100.11
netmask 255.255.255.255

auto venet0:1
iface venet0:1 inet static
address 1.2.100.232
netmask 255.255.255.255
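
With venet these aliases are normally written by vzctl rather than by hand; I would delete and re-add one of the addresses from the host, so that vzctl also sets up the host-side route and ARP entry again (CT ID 123 from above is used purely as an example, adjust to this container's real ID):

vzctl set 123 --ipdel 1.2.100.11 --save
vzctl set 123 --ipadd 1.2.100.11 --save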
 
Now I have added an eth0 device in the web GUI, but it is not visible inside the container with ifconfig. I have also tried to add an eth0 entry manually in /etc/network/interfaces - it is still not visible.


And remember: venet0:0 is not reachable, while venet0:1 is pingable ... I do not understand this. But the ping reply does not come from the container - it comes from the host. This is verified by an SSH key and a login test.

On the host I see the ARP request from the router, but the host does not answer:

13:22:18.136244 ARP, Request who-has 1.2.100.11 tell 1.2.100.1, length 46

Now I found this with Webmin:

veth120.0 Unknown VLAN No address configured None Up

On other Proxmox hosts this entry is not shown for containers.
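
veth120.0 looks like the host-side end of the veth pair created for the eth0 added in the GUI (120 would be the CT ID). A veth interface only carries traffic once it is attached to a bridge; a quick check (sketch, with vmbr0 assumed as the default Proxmox bridge):

# is veth120.0 a port of any bridge?
brctl show
# test only: attach it by hand
brctl addif vmbr0 veth120.0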
 
