[SOLVED] Unable to SSH/Ping Linux VMs from Within Proxmox Shell

ploxxer

Member
Dec 31, 2021
I have run into a bit of a peculiar issue while trying to set up InfluxDB within my PVE environment. Whenever I open a console/shell on one of my actual PVE hosts and attempt to ping or SSH into a Linux VM in my environment, I am unable to:

Code:
root@lola:~# ping 10.10.80.45
PING 10.10.80.45 (10.10.80.45) 56(84) bytes of data.
From 10.10.80.0 icmp_seq=9 Destination Host Unreachable
From 10.10.80.0 icmp_seq=10 Destination Host Unreachable
From 10.10.80.0 icmp_seq=11 Destination Host Unreachable
^C
--- 10.10.80.45 ping statistics ---
13 packets transmitted, 0 received, +3 errors, 100% packet loss, time 12292ms
pipe 4
root@lola:~# ssh 10.10.80.45
ssh: connect to host 10.10.80.45 port 22: No route to host
root@lola:~#

Even more peculiar, I *AM* able to ping Windows VMs within my PVE environment:

Code:
root@lola:~# ping 10.10.80.67
PING 10.10.80.67 (10.10.80.67) 56(84) bytes of data.
64 bytes from 10.10.80.67: icmp_seq=7 ttl=128 time=0.595 ms
64 bytes from 10.10.80.67: icmp_seq=8 ttl=128 time=0.616 ms
64 bytes from 10.10.80.67: icmp_seq=9 ttl=128 time=0.584 ms
64 bytes from 10.10.80.67: icmp_seq=10 ttl=128 time=0.580 ms
^C
--- 10.10.80.67 ping statistics ---
10 packets transmitted, 4 received, 60% packet loss, time 9197ms
rtt min/avg/max/mdev = 0.580/0.593/0.616/0.014 ms
root@lola:~#

I can also confirm that I can ping the Proxmox hosts *from* a Linux VM:


Code:
$ ping 10.10.10.110
PING 10.10.10.110 (10.10.10.110) 56(84) bytes of data.
64 bytes from 10.10.10.110: icmp_seq=1 ttl=64 time=0.636 ms
64 bytes from 10.10.10.110: icmp_seq=2 ttl=64 time=0.420 ms
64 bytes from 10.10.10.110: icmp_seq=3 ttl=64 time=0.438 ms
64 bytes from 10.10.10.110: icmp_seq=4 ttl=64 time=0.387 ms
^C
--- 10.10.10.110 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3074ms
rtt min/avg/max/mdev = 0.387/0.470/0.636/0.097 ms

I'm not sure why I would be able to reach the Windows VMs but not the Linux VMs, all else being equal. There are no firewall rules in place that would prevent communication, and these VMs are all on the same subnet and VLAN.

Here is my `/etc/network/interfaces` for reference. Is there something I'm overlooking or something that needs to be enabled on the Linux VM side?

Code:
auto lo
iface lo inet loopback

iface enp35s0 inet manual

iface enxd6f6dc0112ee inet manual

iface enp36s0 inet manual

iface enxdad9ab886db9 inet manual

auto vmbr10
iface vmbr10 inet static
    address 10.10.10.110/24
    gateway 10.10.10.1
    bridge-ports enp35s0.10
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr30
iface vmbr30 inet static
    address 10.10.30.0/24
    bridge-ports enp35s0.30
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#External VM Network

auto vmbr80
iface vmbr80 inet static
    address 10.10.80.0/24
    bridge-ports enp35s0.80
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#Internal VM Network

auto vmbr40
iface vmbr40 inet static
    address 10.10.40.0/24
    bridge-ports enp35s0.40
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#VPN Network

iface vmbr40 inet static
    address 10.10.40.0/24
#VPN Network

auto vmbr110
iface vmbr110 inet static
    address 10.10.110.0/24
    bridge-ports enp35s0.110
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#Security Network

iface vmbr110 inet static
    address 10.10.110.0/24
#Security Network

auto vmbr52
iface vmbr52 inet static
    address 10.10.52.0/24
    bridge-ports enp35s0.52
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#Cluster Network

Please let me know if you see what I am not seeing!
 
From your output, when you ping from the Linux VM to PVE, your ICMP communication is on the 10.10.10.0/24 network.
When you ping from PVE to the VM, you are using 10.10.80.0/24.

There could be a few reasons, in order of likelihood:
1) You are using the wrong IP (check with "ip a")
2) The Linux VM is not on 10.10.80.0/24 (check with "ip a")
3) The firewall on the VM is not set up to accept traffic on 10.10.80.0/24 (check with tools specific to your Linux distro/release/firewall)
4) Some sort of misconfiguration

Good luck

Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
1. I have confirmed the IP addresses of both the Proxmox host and the VMs
- Proxmox host inet 10.10.10.110/24 scope global vmbr10
2. I have confirmed the IP of the Linux host, am able to SSH to it, and am able to access the services that are being hosted off of it
- inet 10.10.80.45/24 brd 10.10.80.255 scope global dynamic ens18
3. I am currently using Ubuntu and have not configured the firewall in any way. The issue does not seem to stem from the VMs but from the Proxmox host itself: I can ping the Proxmox host FROM the VM, just not the other way around (pinging a Linux VM from the Proxmox host). As previously mentioned, I AM able to ping Windows VMs on the same subnet and VLAN, so I don't believe this is an issue with the firewall rules on my actual router/firewall.
 
Your information is inconsistent between the posts. In the opening message:
from one of my actual PVE hosts and attempt to ping or SSH a Linux VM from within my environment, I am unable to:
In a subsequent message:
have confirmed the IP of the Linux host, am able to SSH to it, and am able to access the services that are being hosted off of it
- inet 10.10.80.45

I am going to bet a doughnut that the problem is (4) misconfiguration.
All of your interfaces except vmbr10 are set to an x.x.x.0 address, which is not a valid host IP (it is the network address of the subnet).
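The point about x.x.x.0 can be illustrated with Python's standard ipaddress module (a small demonstration of the addressing rule, not something from the thread):

```python
import ipaddress

# 10.10.80.0/24 describes a whole subnet; its .0 address is the
# network address, not an address a host (or bridge) can use.
net = ipaddress.ip_network("10.10.80.0/24")
print(net.network_address)    # 10.10.80.0
print(net.broadcast_address)  # 10.10.80.255

# Usable host addresses run from .1 through .254:
hosts = list(net.hosts())
print(hosts[0], hosts[-1])    # 10.10.80.1 10.10.80.254

# Configuring "address 10.10.80.0/24" on an interface keeps the .0 IP,
# which matches the "From 10.10.80.0 ..." lines in the ping output above:
iface = ipaddress.ip_interface("10.10.80.0/24")
print(iface.ip == net.network_address)  # True
```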


Good luck

 
I have since simplified/cleaned up my `/etc/network/interfaces` file to look like this:


Code:
auto lo
iface lo inet loopback

iface enp35s0 inet manual

iface enxd6f6dc0112ee inet manual

iface enp36s0 inet manual

iface enxdad9ab886db9 inet manual

auto vmbr69
iface vmbr69 inet manual
    bridge-ports enp35s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094


auto vmbr69.10
iface vmbr69.10 inet static
    address 10.10.10.110/24
    gateway 10.10.10.1

auto vmbr69.30
iface vmbr69.30 inet static
    address 10.10.30.0/24


auto vmbr69.40
iface vmbr69.40 inet static
    address 10.10.40.0/24


auto vmbr69.52
iface vmbr69.52 inet static
    address 10.10.52.0/24


auto vmbr69.80
iface vmbr69.80 inet static
    address 10.10.80.0/24


auto vmbr69.110
iface vmbr69.110 inet static
    address 10.10.110.0/24
I am, however, still experiencing the issue. The new config simply establishes separate VLANs with their corresponding subnets on one bridge, rather than several bridges, like in my previous config.
 
Last edited:
I was wrong about the .0 IP simply establishing the subnet. I have since redone my /etc/network/interfaces to something that uses valid host IPs, and things now appear to be working properly.

My understanding of how some of this works was incorrect.
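For anyone landing here later, a sketch of what the fixed file might look like (the .110 host addresses are illustrative; only the first VLAN stanzas are shown, the rest follow the same pattern):

```
auto vmbr69
iface vmbr69 inet manual
    bridge-ports enp35s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr69.10
iface vmbr69.10 inet static
    address 10.10.10.110/24
    gateway 10.10.10.1

# Each VLAN interface gets a real host address (.110 here),
# not the .0 network address of its subnet:
auto vmbr69.80
iface vmbr69.80 inet static
    address 10.10.80.110/24
```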
 
