Cannot reach VMs on the local network, but VMs can connect to the internet

J-L-A

New Member
Apr 24, 2025
New to Proxmox, with little knowledge of Linux; I've mostly worked with Windows Server Hyper-V at work.

I am greatly in need of some expert help with my new home setup: the VMs are able to get a DHCP IP from the existing router and can access the internet, BUT I cannot access or even ping the VMs' IP addresses. I spent much of yesterday looking into this, but couldn't figure out any tangible solution.

This is a single Proxmox instance for our home that will be used to run OPNSense; I will be adding more VMs and containers once I have figured out the connectivity issue. This hardware/firewall setup will be a replacement for my current TP-Link router/firewall and fiber gateway. I even created a tiny Windows 11 VM to see if it replicates the issue, since I'm more familiar with troubleshooting Windows. Sad to say, I ended up getting frustrated, as it is the same with the Windows VM.

Hardware setup is as follows:
Onboard Ethernet ports
enp88s0 -- vmbr0, management interface, also used by the VMs; IP address 192.168.68.200
enp91s0 -- spare, not in use at the moment

Onboard Intel X710 (2x SFP+)
enp3s0f0np0 - used as PCI passthrough on OPNSense for WAN
enp3s0f1np1 - spare, not in use at the moment

Add-on Intel X520-X2 (used for LACP, bond0/vmbr1), connecting to a switch with LACP ports
enp1s0f0
enp1s0f1

Current VMs:
OPNSense - IP address 192.168.68.254
Win11 - IP address 192.168.68.223

* both VMs CAN connect to the internet and ping 8.8.8.8 or any website
* the VMs cannot ping each other
* I can access/log in to the Proxmox IP, and Proxmox has internet access via vmbr0
* the Proxmox shell cannot ping any VM IP
* the local network cannot ping any VM or open the OPNSense IP address
* EDIT: VMs CAN ping the Proxmox host and other systems on the same network
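
To narrow down where the pings die, I figure I can watch the bridge while pinging a VM from another machine (a diagnostic sketch, assuming tcpdump is installed on the host):
Code:
# run on the Proxmox host while pinging 192.168.68.254 from the LAN;
# requests with no replies = the drop is inside the VM,
# no requests at all = the traffic never reaches the bridge
tcpdump -ni vmbr0 icmp and host 192.168.68.254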

Ping from the Proxmox shell
Code:
root@pve_gate:~# ping 192.168.68.254
PING 192.168.68.254 (192.168.68.254) 56(84) bytes of data.
^C
--- 192.168.68.254 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5159ms
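
A quick ARP check should also tell a layer-2 problem apart from a firewall drop (my reading of the neighbour states, for reference):
Code:
ip neigh show 192.168.68.254
# REACHABLE/STALE with a MAC listed = layer 2 is fine, the replies are being dropped
# FAILED or INCOMPLETE = the VM never answered ARP at all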


ip a
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp88s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 58:47:ca:76:66:a5 brd ff:ff:ff:ff:ff:ff
3: enp91s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 58:47:ca:76:66:a6 brd ff:ff:ff:ff:ff:ff
4: enp1s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a0:36:9f:1b:b7:6c brd ff:ff:ff:ff:ff:ff
6: enp1s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a0:36:9f:1b:b7:6c brd ff:ff:ff:ff:ff:ff permaddr a0:36:9f:1b:b7:6e
8: wlp92s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4c:50:dd:3d:27:50 brd ff:ff:ff:ff:ff:ff
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 58:47:ca:76:66:a5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.68.200/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5a47:caff:fe76:66a5/64 scope link
       valid_lft forever preferred_lft forever
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether a0:36:9f:1b:b7:6c brd ff:ff:ff:ff:ff:ff
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:36:9f:1b:b7:6c brd ff:ff:ff:ff:ff:ff
    inet6 2600:1700:4ca1:a34e:a236:9fff:fe1b:b76c/64 scope global dynamic mngtmpaddr
       valid_lft 85934sec preferred_lft 13934sec
    inet6 fe80::a236:9fff:fe1b:b76c/64 scope link
       valid_lft forever preferred_lft forever
12: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN group default qlen 1000
    link/ether b6:c0:4e:79:2b:8e brd ff:ff:ff:ff:ff:ff
13: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether ce:14:d4:ab:98:12 brd ff:ff:ff:ff:ff:ff
14: tap200i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 0e:0c:a4:cf:b6:b2 brd ff:ff:ff:ff:ff:ff

ip r
Code:
default via 192.168.68.1 dev vmbr0 proto kernel onlink
192.168.68.0/24 dev vmbr0 proto kernel scope link src 192.168.68.200


cat /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp88s0 inet manual

iface enp91s0 inet manual

iface enp3s0f0np0 inet manual

auto enp1s0f0
iface enp1s0f0 inet manual

iface enp3s0f1np1 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

iface wlp92s0 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
#LAN Trunk

auto vmbr0
iface vmbr0 inet static
        address 192.168.68.200/24
        gateway 192.168.68.1
        bridge-ports enp88s0
        bridge-stp off
        bridge-fd 0
#MGMT

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10 20 30 40 68
#LACP Bridge

source /etc/network/interfaces.d/*
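
Since vmbr1 sits on the LACP bond, one more thing I can check is whether the bond actually negotiated with the switch (standard kernel bonding status, as far as I know):
Code:
cat /proc/net/bonding/bond0
# expect "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
# and "MII Status: up" for both slave interfaces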
 
Usually, if you're on the same DHCP network and internet is working, it is the router/firewall/software firewall that is causing the problem; it will likely be client isolation, or the need to allow the traffic via a firewall rule (a Windows firewall rule, etc.).

You also typically want to use both NICs for OPNSense/pfSense: one needs to be WAN, the other LAN. You probably want all the VMs and the network on the one currently being shared to the VMs, but this also means you need a WiFi router, switch, etc. to link it out to the network, to allow external access into OPNSense and the VMs, with the WAN going straight to your modem.
 
I think so too; this is most likely a firewall issue on the Proxmox VE side, BUT I have tried turning off the firewall at the Datacenter, host, and VM levels, and still nothing goes through. I am missing something. I don't think this is related to a Windows firewall rule, as the initial install of Windows allows ICMP traffic through. EDIT: ICMP response from Windows.
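
To double-check that, these are the commands I believe verify it from the shell (pve-firewall is the Proxmox CLI for the built-in firewall):
Code:
pve-firewall status            # should report the firewall as stopped/disabled
iptables-save | grep -ci DROP  # rough check for leftover DROP rules (ideally 0)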

My initial problem is connecting to the console of OPNSense via vmbr0 (IP address 192.168.68.254); if I cannot connect to the web UI of the OPNSense VM, it's going to be hard to configure OPNSense. This is an initial proof of concept for me, to use Proxmox to host OPNSense before I do a final transition. I have already allocated enp3s0f0np0 as the WAN; it's configured as PCI passthrough, since this is where the XGS-PON will be connected. LAN will be going through the LACP.
 
It is likely something in your network setup or OPNSense, outside of Proxmox itself.

I am not sure what the issue would be here; I have a pfSense instance on a tiny PC using Proxmox, and I just set up the two LANs in Proxmox, then set up one NIC in the VM for each physical NIC to be WAN/LAN, and it worked fine.

Do your other PCs, etc. on your network connect to one another fine, and you can obviously connect into Proxmox itself fine? (If yes, which seems to be the case, it seems something is wrong with your OPNSense install? Maybe check all your settings, or try again with a new VM?)
 
Yes, all PCs connected to the same network can talk to each other. As I write this, I'm using a laptop and can connect to the Proxmox host, which is on the same LAN. Technically, both VMs should also be able to respond to ICMP, since they are on the same network. What confuses me is that both OPNSense and Win11 inside Proxmox are able to get a DHCP IP from the router on the same network.
I just realized and tested that both VMs can get a ping response from any system on the network. I also tried logging in to the Proxmox web GUI from the Windows VM that is running inside Proxmox.
 
So it seems like two different issues:
Win11 - ICMP was blocked; after enabling Core Networking in the firewall settings, I was able to get a ping response on the single network (vmbr0).
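
For reference, I believe the command-line equivalent of what I enabled is roughly the following, run from an elevated prompt inside the Windows VM (the rule name is just my own label):
Code:
netsh advfirewall firewall add rule name="Allow ICMPv4 echo-in" protocol=icmpv4:8,any dir=in action=allow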

For OPNSense, it's confusing for me... I did some experimenting and found that if I ONLY connect the management interface (vmbr0), I am able to log in to OPNSense via the web interface, BUT as soon as I add vmbr1 (bond0/LACP), I cannot connect to the VM at all, not even to the web GUI of OPNSense.
I have not tried adding the PCI passthrough yet (port 1 of the X710 SFP+), just to avoid dealing with multiple ports.
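
One thing I plan to try next is pinning that second NIC to a single VLAN, so untagged traffic on the trunk can't collide with what OPNSense expects on its interfaces (my guess at the command, assuming VMID 100 and VLAN 68):
Code:
qm set 100 --net1 virtio,bridge=vmbr1,tag=68
# limits the vmbr1 NIC to VLAN 68 traffic only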

Any insights as to why this is happening?