Preface: I'm not a Linux/Unix guru, but I can figure things out with some google-fu for the most part. I've only included the components of my network that are relevant to this post.
Configuration:
*Server 00
2x Xeon E5-2678 v3, 256GB RAM, Intel X540-T2 10GbE, Intel I350 2x 1GbE and onboard X540-T2 10GbE, Proxmox 6.2-4 with all community-repo updates installed.
*Server 01
E5-2678 v3, 128GB RAM, onboard X540-T2 10GbE, Proxmox 6.2-4 with all community-repo updates installed.
Network device map:
*pfSense router @ 10.10.100.1, running Squid, IDS/IPS, pfBlocker, etc.
*Aruba S2500-24P @ 10.10.100.2, ports 0/1/0 - 0/1/3 (4x SFP+ ports with 10GbE transceivers) and ports 0/0/0 - 0/0/23 (1GbE ports). All ports are active; I removed the stacking configuration so all ports are usable together.
*Wireless AP @ 10.10.100.3, attached to Aruba 1G port 0/0/0 with PoE.
*Windows PC @ 10.10.100.112 on Aruba port 0/1/0 (SFP+ 10GbE transceiver), with an Intel X540-T2 10GbE NIC.
*Windows PC @ 10.10.100.119 via wireless AP
*Windows VM on Proxmox @ 10.10.100.227
*Server 00
eno1 = 1st port on X540-T2 10GbE
eno2 = 2nd port on X540-T2 10GbE
ens8f0 = 1GbE
ens8f1 = 1GbE
ens3f0 = 10GbE
ens3f1 = 10GbE
Linux bridge vmbr0: eno1 attached, static IPv4 10.10.100.10 (I tried it with the gateway set to 10.10.100.1 and without).
Linux bridge vmbr1: eno2 attached, no IP or gateway specified. (A sketch of the resulting /etc/network/interfaces is below.)
IPv6 is enabled on all devices, including the gateway, switch, etc., but is not used.
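For reference, the relevant part of /etc/network/interfaces on Server 00 looks roughly like this (a paraphrased sketch, not a verbatim copy; interface names are lowercase on the CLI, and the gateway line is the one I also tried removing):

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.10.100.10/24
        gateway 10.10.100.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0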
Problem:
When eno1 (the 10GbE port on the server) is plugged into the Aruba switch via an SFP+ 10G transceiver (port 0/1/1) and is part of Linux bridge vmbr0, I cannot access the Proxmox web UI or the FreeNAS web UI (FreeNAS is also on vmbr0). When I simply move the Ethernet cable on the switch from a 10G port to a 1G port, I can then access both the Proxmox web UI and the FreeNAS web UI of one of my VMs.
I can still ping everything from my PC, including the VMs and the hypervisor, while the server is plugged into the 10GbE SFP+ port on the Aruba switch (an MTU test I plan to run next is sketched below).
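Since small pings get through but the TCP sessions to the web UI stall, one thing I plan to check is whether full-size frames survive the 10G path. A minimal sketch of the test (1472 bytes of payload plus 28 bytes of IP/ICMP headers fills the 1500-byte MTU; addresses are from my network):

From the Windows PC (10.10.100.112), a don't-fragment ping at full MTU:
ping -f -l 1472 10.10.100.10

From the Proxmox host, the same test in the other direction:
ping -M do -s 1472 10.10.100.112

If these fail while normal pings succeed, something on the 10G path is dropping full-size frames.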
Sanity Checks:
At no time does Proxmox lose the ability to ping the network, ping the internet, or be pinged by a PC on the network.
1: I passed ens3f0 (10GbE) through to my FreeNAS VM via PCI passthrough; I can access the FreeNAS web UI and get full 10G speeds. I repeated this experiment with 3 other VMs with the same results: everything works, including SMB shares and web UIs.
2: I set up vmbr1 with eno2 (no IP or gateway specified), switched my FreeNAS VM's network device to it, and deleted the PCI passthrough so only vmbr1 was attached to the FreeNAS VM. No access to the web UI, and Windows SMB was not reachable.
3: I fired up a nearly identical server and installed Proxmox: same issue. No web UI when eno1 (in vmbr0) is attached to the Aruba switch on 10GbE via an SFP+ transceiver, but when I move the Ethernet cable on the switch from the 10G SFP+ port to a 1G port, the web UI becomes available.
4: RDP from my PC to the Windows VM (attached to vmbr0) works fine. That Windows VM can reach the web UIs of both Proxmox and FreeNAS while they are attached to vmbr0.
5: I tried different SFP+ ports on the Aruba switch to see if it was a transceiver issue, swapping every port and transceiver around.
6: ip addr on the hypervisor shows normal output: eno1 as "<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000", and vmbr0 with the same flags, MTU 1500 and state UP.
Both vmbr0 and eno1 have the same MAC address, and vmbr0 shows the specified IPv4 address: "inet 10.10.100.10/24 brd 10.10.100.255 scope global vmbr0". (Further link/bridge checks are sketched after this list.)
7: Swapped out patch cables while changing transceivers and switch ports, using patch cables known to work at 10G.
8: Tried both VirtIO and Intel E1000 as the VM network device; hardware virtualization is enabled on the CPU and in the BIOS.
9: Turned off the Squid, pfBlocker and IDS/IPS services on the pfSense router; no change.
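The extra link/bridge checks mentioned in sanity check 6 would be along these lines, run on the hypervisor (a sketch; interface names assume eno1/vmbr0 as above):

ethtool eno1                                    # negotiated speed, duplex and link state on the 10G uplink
ethtool -S eno1 | grep -i -E "err|drop|pause"   # NIC counters: rx/tx errors, drops, pause frames
ethtool -a eno1                                 # flow-control (pause) settings on the port
bridge link show                                # confirm eno1 is the only port in vmbr0 and is forwarding
ip -d link show vmbr0                           # bridge details (STP state, MTU, etc.)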
When "SS | grep 8006 was entered while attempting to connect to the proxmox webui from my PC while proxmox was connected to 10GbE sfp+ on Aruba switch, the following outputs were observed:
tcp CLOSING 10.10.100.10:8006 1 1776 10.10.100.119:51119 (10.10.100.119 windows laptop, successful ping to hypervisor, no webui or SMB share access).
tcp ESTAB 10.10.100.10:8006 0 0 10.10.100.227:50170 (10.10.100.227 VM had webui and SMB access).
tcp FIN-WAIT-1 10.10.100.10:8006 0 1776 10.10.100.112:57761 (10.10.100.112 PC did not have webui or SMB access)
When "SS | grep 8006 was ran a few moments later:
tcp CLOSING 10.10.100.10:8006 1 1776 10.10.100.119:51119 (10.10.100.119 windows laptop, successful ping to hypervisor, no webui or SMB share access).
tcp ESTAB 10.10.100.10:8006 0 0 10.10.100.227:50170 (10.10.100.227 VM had webui and SMB access).
tcp Closing 10.10.100.10:8006 1 1776 10.10.100.112:57761 (10.10.100.112 PC did not have webui or SMB access)
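The Send-Q stuck at 1776 bytes for the connections that fail (while the connection from the VM on the same bridge stays ESTAB) looks to me like the web UI's larger TCP packets leave the host but never get acknowledged. A packet capture while reloading the page from the PC should show whether they are being retransmitted; a sketch of what I'd run on the hypervisor (port 8006 is the Proxmox web UI, interface names as above):

tcpdump -nn -i eno1 port 8006     # what actually hits the physical 10G uplink
tcpdump -nn -i vmbr0 port 8006    # what the bridge itself sees

Comparing the two (or a Wireshark capture on the PC side) should show whether the big packets disappear between the bridge and the switch.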
Anyone wanna point me in the right direction to get the web UI working while my hypervisor is connected to the 10GbE SFP+ port on my Aruba switch?
It's either got to be a switch settings issue or a Proxmox settings issue, because I think I've swapped around and eliminated most of the other variables. It seems odd that my PC connected to a 10G port on the switch has no issues, and that when I PCI-passthrough a 10GbE device to a VM, that VM has no issues either, which could mean it's just a Proxmox issue?
Your help is greatly appreciated.