Hi
I bought a new, faster NIC for my home server. After installing it and pointing vmbr0 at the new interface enp1s0, there is no longer any switching between the VMs and my LAN in either direction: the VMs can ping the Proxmox host, the Proxmox host can ping the VMs, and the Proxmox host can still reach the LAN (via vmbr0).
My network is a single subnet, 192.168.1.0/24; the Proxmox host is .50 and the VMs are .51 and upwards.
I can't ping the router (.1) from inside a VM (connected via the Proxmox console), nor reach a VM from outside.
/etc/network/interfaces old:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

iface eno2 inet manual
/etc/network/interfaces new:
Code:
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

iface eno1 inet manual

iface eno2 inet manual
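For reference, these are the checks I ran to see which port the bridge actually enslaves and which MAC it carries (standard iproute2/ifupdown2 tooling, interface names as above):

```shell
# Show all interfaces and their state in brief form
ip -br link show

# List the bridge's member ports (vmbr0 should list enp1s0, not eno1)
bridge link show

# MAC address the bridge itself is using; a Linux bridge normally
# inherits a MAC from one of its member ports
cat /sys/class/net/vmbr0/address

# Reload the configuration with ifupdown2 (this is what the GUI's
# "apply configuration" does under the hood)
ifreload -a
```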
Is there some bridge behaviour that still ties traffic to the old eno1 interface? It looks as if packets are being dropped.
I tried to deactivate the firewall, but it does not seem to be active at all (which is fine in my home network).
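To double-check, I looked at the Proxmox firewall service directly with the stock pve-firewall CLI:

```shell
# Report whether the Proxmox firewall daemon is actually running
pve-firewall status

# Dump the rule set it would compile, if any
pve-firewall compile
```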
output of systemctl status networking.service:
Code:
● networking.service - Network initialization
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2021-02-10 23:42:46 CET; 1min 29s ago
Docs: man:interfaces(5)
man:ifup(8)
man:ifdown(8)
Process: 22497 ExecStart=/usr/share/ifupdown2/sbin/start-networking start (code=exited, status=0/SUCCESS)
Main PID: 22497 (code=exited, status=0/SUCCESS)
Feb 10 23:42:45 vas systemd[1]: Starting Network initialization...
Feb 10 23:42:45 vas networking[22497]: networking: Configuring network interfaces
Feb 10 23:42:46 vas systemd[1]: Started Network initialization.
There seems to be some strange behaviour going on: as soon as I switch vmbr0 back to eno1, everything works as desired and both the VMs and the host are reachable.
After changing the bridge to enp1s0 once, I have to reboot to get it working again (even if I switch back to eno1, traffic stays blocked). "Apply configuration" warns me about:
Code:
vmbr0 : warning: vmbr0: setting bridge mac address: cmd '/sbin/bridge fdb replace 00:00:00:00:00:00 dev vmbr0 self' failed: returned 255 (RTNETLINK answers: Invalid argument
Could the bridge's MAC address be part of the problem? All VMs have the default firewall setting ("no"), but "MAC filter" is set to "yes".
The new adapter is a QNAP card with a 2.5 Gb/s Intel I225-LM chip; the old adapter is the Supermicro X11 motherboard's onboard NIC.
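One thing I have not tried yet: pinning the bridge to a fixed MAC via the `hwaddress` option that ifupdown2 supports in an iface stanza, to rule out the MAC inheritance as the cause. A sketch of what that stanza might look like (the MAC below is a placeholder, not my real address):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        hwaddress aa:bb:cc:dd:ee:ff   # placeholder: substitute the NIC's real MAC
```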
More debug output. lspci -v of both cards, the new one first:
Code:
01:00.0 Ethernet controller: Intel Corporation Device 15f2 (rev 03)
        Subsystem: QNAP Systems, Inc. Device c001
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at 91200000 (32-bit, non-prefetchable) [size=1M]
        Memory at 91300000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at 91100000 [disabled] [size=1M]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number [...]
        Capabilities: [1c0] Latency Tolerance Reporting
        Capabilities: [1f0] Precision Time Measurement
        Capabilities: [1e0] L1 PM Substates
        Kernel driver in use: igc
        Kernel modules: igc

03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
        Subsystem: Super Micro Computer Inc I210 Gigabit Network Connection
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at 91500000 (32-bit, non-prefetchable) [size=512K]
        I/O ports at 5000 [size=32]
        Memory at 91580000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number [...]
        Capabilities: [1a0] Transaction Processing Hints
        Kernel driver in use: igb
        Kernel modules: igb
ethtool:
Code:
Settings for eno1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

Settings for enp1s0:
        Supported ports: [ ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 2500Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off (auto)
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
PS: I originally created this post in German (in the German part of the forum), then translated it and reposted it here.