Dedicated Proxmox network interface connectivity issue

AndyKosh

New Member
Nov 10, 2025
I've created a dedicated NFS network on 3 Proxmox nodes: created a bridge (vmbr1), assigned an interface (enp5s0), and created a Linux VLAN with an IP address but no gateway, using vmbr2 as the VLAN raw device. All three nodes can ping each other's NFS interfaces, but not the other network: I'm not able to ping the NFS storage, which is on a different VLAN. Am I missing anything in my configuration?
 
Hi @AndyKosh ,

It is impossible to draw any conclusions about the behavior of your network without detailed technical information about your environment. This should include network configuration files, the current state of the systems involved, NAS network settings, routing tables, ping outputs, and any other relevant diagnostics.
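A sketch of how that information is typically gathered on each node (command list only; the outputs are host-specific, and the commands are guarded so a missing tool does not abort the run):

```shell
# Hedged sketch: diagnostics worth collecting from every PVE node (and the NAS,
# where possible) before asking for help with a network problem.
collect_diag() {
  for cmd in "ip -br addr" "ip route" "bridge vlan show"; do
    echo "### $cmd"
    $cmd 2>&1 || true          # run, but tolerate missing tools
  done
  echo "### /etc/network/interfaces"
  cat /etc/network/interfaces 2>/dev/null || true
}
collect_diag
```

Running this on each node and pasting the result (inside CODE tags) gives enough context to reason about reachability.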


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I've added 8 interfaces (enp5s0–enp12s0), connected through Cisco UCS (4 interfaces per uplink switch for redundancy), and 4 bridges (vmbr0–vmbr3):

1. vmbr0 -> bond0 (enp9s0/enp10s0) -> vmbr0.227 (Proxmox management)
2. vmbr1 -> enp7s0 -> VM traffic
3. vmbr2 -> bond2 (enp5s0/enp6s0) -> created sub-interface "NFS" with vmbr2 as the VLAN raw device, VLAN 1921
4. vmbr3 -> bond3 (enp11s0/enp12s0) -> created sub-interface "Migration" with vmbr3 as the VLAN raw device


root@uxus1vsp1pxm003:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4f brd ff:ff:ff:ff:ff:ff
    altname enx0025b500004f
3: enp6s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4f brd ff:ff:ff:ff:ff:ff permaddr 00:25:b5:00:00:3f
    altname enx0025b500003f
4: enp7s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
    altname enx0025b500004e
5: enp8s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff permaddr 00:25:b5:00:00:5e
    altname enx0025b500005e
6: enp9s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:5d brd ff:ff:ff:ff:ff:ff permaddr 00:25:b5:00:00:4d
    altname enx0025b500004d
7: enp10s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:5d brd ff:ff:ff:ff:ff:ff
    altname enx0025b500005d
8: enp11s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond3 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:0c brd ff:ff:ff:ff:ff:ff
    altname enx0025b500000c
9: enp12s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond3 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:0c brd ff:ff:ff:ff:ff:ff permaddr 00:25:b5:00:00:1c
    altname enx0025b500001c
25: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:5d brd ff:ff:ff:ff:ff:ff
26: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:5d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:b5ff:fe00:5d/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
27: vmbr0.227@vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:5d brd ff:ff:ff:ff:ff:ff
    inet 10.71.227.115/24 scope global vmbr0.227
       valid_lft forever preferred_lft forever
    inet6 fe80::225:b5ff:fe00:5d/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
28: bond2: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4f brd ff:ff:ff:ff:ff:ff
29: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:b5ff:fe00:4f/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
30: NFS@vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4f brd ff:ff:ff:ff:ff:ff
    inet 10.192.8.238/22 scope global NFS
       valid_lft forever preferred_lft forever
    inet6 fe80::225:b5ff:fe00:4f/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
31: bond3: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr3 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:0c brd ff:ff:ff:ff:ff:ff
32: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:0c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:b5ff:fe00:c/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
33: Migration@vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:0c brd ff:ff:ff:ff:ff:ff
    inet 10.73.2.153/24 scope global Migration
       valid_lft forever preferred_lft forever
    inet6 fe80::225:b5ff:fe00:c/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
34: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
35: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:b5ff:fe00:4e/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
36: vmbr1.1921@vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master V1921 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
37: V1921: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:b5ff:fe00:4e/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
38: vmbr1.227@vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master V227 state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
39: V227: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:b5:00:00:4e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:b5ff:fe00:4e/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

root@uxus1vsp1pxm003:~# cat /etc/network/interfaces


auto lo
iface lo inet loopback

auto enp7s0
iface enp7s0 inet manual

auto enp5s0
iface enp5s0 inet manual

auto enp6s0
iface enp6s0 inet manual

auto enp8s0
iface enp8s0 inet manual

auto enp9s0
iface enp9s0 inet manual

auto enp10s0
iface enp10s0 inet manual

auto enp11s0
iface enp11s0 inet manual

auto enp12s0
iface enp12s0 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp10s0 enp9s0
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2

auto bond1
iface bond1 inet manual
bond-slaves enp7s0 enp8s0
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2

auto bond2
iface bond2 inet manual
bond-slaves enp5s0 enp6s0
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2

auto bond3
iface bond3 inet manual
bond-slaves enp11s0 enp12s0
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2

auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4092

auto vmbr0.227
iface vmbr0.227 inet static
address 10.71.227.115/24
gateway 10.71.227.254

auto vmbr1
iface vmbr1 inet manual
bridge-ports bond1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto vmbr2
iface vmbr2 inet manual
bridge-ports bond2
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto vmbr3
iface vmbr3 inet manual
bridge-ports bond3
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto NFS
iface NFS inet static
address 10.192.8.238/22
vlan-id 1921
vlan-raw-device vmbr2

auto Migration
iface Migration inet static
address 10.73.2.153/24
vlan-id 302
vlan-raw-device vmbr3

source /etc/network/interfaces.d/*
root@uxus1vsp1pxm003:~#
 
@AndyKosh ,

There is a lot going on here, and none of it is related to PVE. This is purely Linux network management.

The output you provided is extremely difficult to read. You must have noticed the terminal escape sequences in your copy/paste. The forum allows for flexible text formatting, with CODE </> tags as one option. In many cases, you can also paste plain text using Ctrl + Shift + V.

In all of the output, you only mentioned that the “NFS storage is on a different VLAN” - different from what? Which VLAN? Why do you believe you should be able to ping or access that VLAN if your hosts' NFS interfaces are on a non-routable subnet? Should this NAS be accessible via the management network (VLAN 227)? Did I miss something? If so, I apologize.
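Note that your NFS interface carries a /22, which covers 10.192.8.0 through 10.192.11.255; only a NAS address inside that range is directly reachable without a route. A quick sketch of the subnet arithmetic, where 10.192.8.238/22 is the node's NFS address from your output and 10.192.11.7 is a hypothetical NAS address used purely for illustration:

```shell
# Sketch: check whether two IPv4 addresses fall inside the same subnet.
ip_to_int() {
  oifs=$IFS; IFS=.
  set -- $1                  # split dotted quad into $1..$4
  IFS=$oifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

same_subnet() {  # usage: same_subnet IP1 IP2 PREFIXLEN
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# A /22 spans four third-octet values: 10.192.8.0 - 10.192.11.255.
same_subnet 10.192.8.238 10.192.11.7 22 && echo "same subnet" || echo "different subnets"
# prints: same subnet
```

If the NAS lies outside that range, the node has no path to it unless something routes between the two networks.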

Is this a business environment? In any case, I strongly recommend stripping everything down to the basics and then rebuilding the configuration layer by layer.
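As part of that rebuild: if the NAS is supposed to be reached from the NFS VLAN but sits on a different subnet, the node also needs a route toward it. With no gateway on the NFS interface, traffic to any other subnet follows the default route on vmbr0.227. A hedged sketch of what that could look like in /etc/network/interfaces, assuming a router at 10.192.8.1 on VLAN 1921 and a NAS subnet of 10.192.12.0/24 (both placeholders, not taken from this thread):

```
auto NFS
iface NFS inet static
    address 10.192.8.238/22
    vlan-id 1921
    vlan-raw-device vmbr2
    # Placeholder subnet and router: send NAS traffic out the NFS interface
    post-up ip route add 10.192.12.0/24 via 10.192.8.1 dev NFS
    post-down ip route del 10.192.12.0/24 via 10.192.8.1 dev NFS
```

Whether this applies depends entirely on where the NAS actually lives and what routes between the VLANs; `ip route get <NAS-IP>` on a node will show which interface the traffic currently leaves on.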

