Cannot Ping on HOST

itgiec

New Member
Apr 16, 2024
Hello everyone, I am a newbie to Proxmox VE (version 8.2.2) and I would like to tell you about a really strange problem I have. From my node and containers I can ping any part of my network except the IP assigned to the host's management interface (the server is an HP ProLiant ML30). I don't understand why; Proxmox and the containers otherwise have good connectivity. I want to investigate in more detail what could be happening. What do you suggest I check?

By the way, all firewalls are deactivated.
 
Without any actual data or error messages shown, only my crystal ball could help. Unfortunately, it is currently in maintenance mode.

Please start by giving us some information, such as the copied-and-pasted output of the commands ip address show, ip route show and cat /etc/network/interfaces, and of course the command which generates the error message. (Please put each command in a separate [CODE]...[/CODE] block.)
 
Hi, thank you for your response. Here is the information:

Code:
~# ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 5c:ed:8c:a0:59:14 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5c:ed:8c:a0:59:15 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f1
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5c:ed:8c:a0:59:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.37/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5eed:8cff:fea0:5914/64 scope link
       valid_lft forever preferred_lft forever
5: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether fe:1d:b9:d5:0a:fc brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:a3:dc:98:a6:70 brd ff:ff:ff:ff:ff:ff
7: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 9a:31:36:b7:0e:63 brd ff:ff:ff:ff:ff:ff
8: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether ce:a3:dc:98:a6:70 brd ff:ff:ff:ff:ff:ff
9: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:d4:b4:af:34:4f brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether fe:2c:6e:a9:d5:b9 brd ff:ff:ff:ff:ff:ff link-netnsid 2
11: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 56:11:af:8b:1f:0c brd ff:ff:ff:ff:ff:ff
12: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ce:cb:47:32:b2:f3 brd ff:ff:ff:ff:ff:ff
13: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether 56:11:af:8b:1f:0c brd ff:ff:ff:ff:ff:ff
18: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr103i0 state UNKNOWN group default qlen 1000
    link/ether e6:76:0d:43:03:4a brd ff:ff:ff:ff:ff:ff
19: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:42:b0:21:9f:bb brd ff:ff:ff:ff:ff:ff
20: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 5a:6d:9c:4a:b2:a4 brd ff:ff:ff:ff:ff:ff
21: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether 06:42:b0:21:9f:bb brd ff:ff:ff:ff:ff:ff

Code:
# ip route show
default via 192.168.200.1 dev vmbr0 proto kernel onlink
192.168.200.0/24 dev vmbr0 proto kernel scope link src 192.168.200.37

Code:
# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.200.37/24
        gateway 192.168.200.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

Code:
# ping 192.168.200.4
PING 192.168.200.4 (192.168.200.4) 56(84) bytes of data.
From 192.168.200.37 icmp_seq=1 Destination Host Unreachable
From 192.168.200.37 icmp_seq=2 Destination Host Unreachable
From 192.168.200.37 icmp_seq=3 Destination Host Unreachable
From 192.168.200.37 icmp_seq=5 Destination Host Unreachable
From 192.168.200.37 icmp_seq=6 Destination Host Unreachable
From 192.168.200.37 icmp_seq=7 Destination Host Unreachable
From 192.168.200.37 icmp_seq=8 Destination Host Unreachable
From 192.168.200.37 icmp_seq=9 Destination Host Unreachable
From 192.168.200.37 icmp_seq=10 Destination Host Unreachable
From 192.168.200.37 icmp_seq=11 Destination Host Unreachable
From 192.168.200.37 icmp_seq=12 Destination Host Unreachable
From 192.168.200.37 icmp_seq=13 Destination Host Unreachable
From 192.168.200.37 icmp_seq=14 Destination Host Unreachable
From 192.168.200.37 icmp_seq=15 Destination Host Unreachable
^C
--- 192.168.200.4 ping statistics ---
17 packets transmitted, 0 received, +14 errors, 100% packet loss, time 16376ms

I should mention that the host where Proxmox is installed does not have a firewall installed; the management controller in question is HPE iLO 5.

Best Regards.
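Editor's note on the ping output above: "Destination Host Unreachable" reported from the pinging host's own address (192.168.200.37 here) means ARP resolution failed, i.e. the kernel never learned a MAC address for 192.168.200.4, so no ICMP packet ever left the box. A quick way to confirm, as a sketch (run on the PVE host; assumes iproute2 is installed):

```shell
# Inspect the kernel's neighbour (ARP) table entry for the iLO address.
# An empty result, or a FAILED/INCOMPLETE state, means ARP replies never
# arrived -- which is exactly what "Destination Host Unreachable" coming
# from your own IP indicates on a directly connected network.
ip neigh show 192.168.200.4
```

This narrows the problem to layer 2 (ARP never answered) rather than ICMP being filtered.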
 
I am not sure I understand this setup correctly. 192.168.200.4 is iLO 5? With a dedicated NIC, or a single one shared with the OS?

Do other connections, like to your router (ping 192.168.200.1), work?

So you have a problem with iLO, not PVE? ;-)

Can 192.168.200.4 be pinged from another computer in that LAN?

Sorry, I am not an iLO specialist, but HP may have implemented some "surprising" behavior...
 
Hi UdoB,

Yes, 200.4 is the static IP of the iLO 5 Shared Network Port. The bridge runs on that same shared port, and yes, I can ping any other online device in my subnet and even on my other VLAN networks. I also thought the problem was iLO, but if I ping it from a host outside of Proxmox, I can see the server, so I think the fault lies between Proxmox and the host.

I implemented an SNMP service in Zabbix inside a container in Proxmox, which worked fine until four days ago, when it stopped working and I realized I was no longer getting a ping reply from Proxmox or from any container inside Proxmox. I will check with HPE to see if iLO has some kind of firewall.
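Editor's note: one pattern worth ruling out here (an assumption, not confirmed for this exact NIC) is that with a Shared Network Port, the iLO's traffic is diverted inside the NIC firmware, and some controllers never answer ARP requests that originate from the host OS on that same physical port, while other machines on the LAN reach the iLO fine — which matches the symptoms described above. A sketch of a check for whether vmbr0 has ever seen a frame from the iLO's MAC (the MAC value below is a hypothetical placeholder; read the real one from the iLO web UI):

```shell
# ILO_MAC is a placeholder -- substitute the address shown under
# Network settings in the iLO web interface.
ILO_MAC="aa:bb:cc:dd:ee:ff"

# Ask the bridge's forwarding database whether that MAC was ever learned.
# No entry means no frame from the iLO has reached vmbr0 at all, pointing
# at the NIC/iLO side rather than at Proxmox.
if bridge fdb show br vmbr0 2>/dev/null | grep -qi "$ILO_MAC"; then
    echo "vmbr0 has seen frames from the iLO"
else
    echo "no frames from the iLO have reached vmbr0"
fi
```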
 
