VM can't ping the internet, but the PVE host pings normally

Evaldo

New Member
Aug 13, 2024
Hello All!

I am facing a problem with a Windows 10 VM that cannot reach the internet. The VM receives its IP address from the router correctly.

On the PVE server itself I can ping the internet normally, both by IP address and by site name.

The host server has 6 NICs. I created a bond (balance-rr) with these six network cards, and after that I created a Linux Bridge and included the bond in it.

I started working with Proxmox a few months ago, and I really don't know whether this problem is related to my network configuration.

This is my network config:

auto lo
iface lo inet loopback

auto ens1f0
iface ens1f0 inet manual

auto ens1f1
iface ens1f1 inet manual

auto ens1f2
iface ens1f2 inet manual

auto ens1f3
iface ens1f3 inet manual

auto eno8303
iface eno8303 inet manual

auto eno8403
iface eno8403 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno8303 eno8403 ens1f0 ens1f1 ens1f2 ens1f3
    bond-miimon 100
    bond-mode balance-rr

auto vmbr1
iface vmbr1 inet static
    address 10.217.10.230/24
    gateway 10.217.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

source /etc/network/interfaces.d/*

Could you please help me with this issue?
 
Hi @Evaldo, welcome to the forum.

It would help if you could provide the network state of the VM (both its config file and its running state).
Additionally, the running state of the host might be helpful. Here is a list of commands to run and examine; if nothing jumps out at you, add the output to the thread (see also the bond/bridge checks sketched after the list):
cat /etc/network/interfaces
ip a
ip route
ping GW
ping hypervisor
- add a second VM: can the VMs ping each other?
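
If nothing in those outputs stands out, the bond and bridge state on the host is worth a look as well. A minimal sketch (bond0 and vmbr1 are the names from your config; replace <vmid> with your VM's ID):

# bond status: mode, active slaves, and link state of each member
cat /proc/net/bonding/bond0
# ports currently attached to the bridges
bridge link show
# the VM configuration, including its netX line
qm config <vmid>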

Additionally, you should use CODE tags (available from the edit box menu) to make your information more readable.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello,
Here is the requested information:

cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens1f0
iface ens1f0 inet manual

auto ens1f1
iface ens1f1 inet manual

auto ens1f2
iface ens1f2 inet manual

auto ens1f3
iface ens1f3 inet manual

auto eno8303
iface eno8303 inet manual

auto eno8403
iface eno8403 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno8303 eno8403 ens1f0 ens1f1 ens1f2 ens1f3
    bond-miimon 100
    bond-mode balance-rr

auto vmbr1
iface vmbr1 inet static
    address 10.217.10.230/24
    gateway 10.217.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

source /etc/network/interfaces.d/*
root@pve01:~#


ip a
root@pve01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens1f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff permaddr d4:04:e6:fb:c6:50
altname enp2s0f0
3: ens1f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff permaddr d4:04:e6:fb:c6:51
altname enp2s0f1
4: ens1f2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff permaddr d4:04:e6:fb:c6:52
altname enp2s0f2
5: ens1f3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff permaddr d4:04:e6:fb:c6:53
altname enp2s0f3
6: eno8303: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff
altname enp6s0f0
altname ens3f0
7: eno8403: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff permaddr d0:46:0c:60:4e:e3
altname enp6s0f1
altname ens3f1
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d0:46:0c:60:4e:e2 brd ff:ff:ff:ff:ff:ff
inet 10.217.10.230/24 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::d246:cff:fe60:4ee2/64 scope link
valid_lft forever preferred_lft forever
10: tap600i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
link/ether 5a:32:a7:5c:61:91 brd ff:ff:ff:ff:ff:ff
root@pve01:~#

ip route
root@pve01:~# ip route
default via 10.217.10.1 dev vmbr1 proto kernel onlink
10.217.10.0/24 dev vmbr1 proto kernel scope link src 10.217.10.230
root@pve01:~#

ping GW
root@pve01:~# ping 10.217.10.1
PING 10.217.10.1 (10.217.10.1) 56(84) bytes of data.
64 bytes from 10.217.10.1: icmp_seq=1 ttl=64 time=0.274 ms
64 bytes from 10.217.10.1: icmp_seq=2 ttl=64 time=0.358 ms
64 bytes from 10.217.10.1: icmp_seq=3 ttl=64 time=0.353 ms
64 bytes from 10.217.10.1: icmp_seq=4 ttl=64 time=0.260 ms
64 bytes from 10.217.10.1: icmp_seq=5 ttl=64 time=0.311 ms
64 bytes from 10.217.10.1: icmp_seq=6 ttl=64 time=0.325 ms
64 bytes from 10.217.10.1: icmp_seq=7 ttl=64 time=0.301 ms
64 bytes from 10.217.10.1: icmp_seq=8 ttl=64 time=0.346 ms
^C
--- 10.217.10.1 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7145ms
rtt min/avg/max/mdev = 0.260/0.316/0.358/0.034 ms
root@pve01:~#


ping hypervisor
root@pve01:~# ping 10.217.10.230
PING 10.217.10.230 (10.217.10.230) 56(84) bytes of data.
64 bytes from 10.217.10.230: icmp_seq=1 ttl=64 time=0.013 ms
64 bytes from 10.217.10.230: icmp_seq=2 ttl=64 time=0.015 ms
64 bytes from 10.217.10.230: icmp_seq=3 ttl=64 time=0.017 ms
64 bytes from 10.217.10.230: icmp_seq=4 ttl=64 time=0.019 ms
64 bytes from 10.217.10.230: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 10.217.10.230: icmp_seq=6 ttl=64 time=0.013 ms
64 bytes from 10.217.10.230: icmp_seq=7 ttl=64 time=0.022 ms
64 bytes from 10.217.10.230: icmp_seq=8 ttl=64 time=0.015 ms
^C
--- 10.217.10.230 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7196ms
rtt min/avg/max/mdev = 0.013/0.017/0.023/0.003 ms
root@pve01:~#


- add a second VM: can the VMs ping each other?
No, the VMs can't ping each other (screenshots of the results are attached).
 

Attachments

  • 2024-08-13 13_31_32-Window.png
  • 2024-08-13 13_33_50-Window.png
Hi @Evaldo, it looks like you found the font formatting in the edit menu; the CODE tags are to the right, represented by this icon: </>

On the surface your network seems OK. However, you skipped over the request for the VM configuration (qm config [vmid]).

That said, since your VMs are Windows, there are additional avenues for investigation. You should review the Ethernet adapter settings, driver status, firewall status, etc. inside the VM.

Are you doing PCI passthrough for NICs by any chance? The VM config would tell us.
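
For example, something along these lines from the PVE shell (600 is an assumption based on the tap600i0 interface shown in your ip a output):

# dump the VM configuration
qm config 600
# a purely virtual NIC should show a line similar to:
#   net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr1
# a passed-through NIC would appear as a hostpciX entry instead of a netX line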

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I am facing the same problem; your VM will be back to normal if you change bond-mode from balance-rr to something like:
bond-mode 802.3ad
bond-xmit-hash-policy layer2
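
For reference, a sketch of how the full bond0 stanza in /etc/network/interfaces could look with that change (this assumes the six switch ports are configured as a single LACP/802.3ad link aggregation group on the switch side):

auto bond0
iface bond0 inet manual
    bond-slaves eno8303 eno8403 ens1f0 ens1f1 ens1f2 ens1f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2
    # apply with ifreload -a, or reboot the node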

However, my question is: balance-rr should offer better migration performance between nodes, while 802.3ad will only be faster if your switch supports LACP.
 
