Hello,
I have a PVE node with two separate network cards:
- 1x single-port NIC (embedded on the motherboard)
- 1x quad-port NIC (PCIe Intel i350-T4)
All NICs have a solid green LED and a solid amber LED. My research indicates that an amber LED (while ambiguous) usually points to a bad network configuration and/or a lower link speed. I am under the impression that this changed suddenly and that it used to be all green, but I could be wrong. In any case it bugs me and I would like to fix it, if possible.
If you already suspect faulty hardware, that makes two of us, specifically for the Intel card since it's a refurbished one. But I have full faith that the on-board one works fine. Not to mention it's highly unlikely that all of them became faulty at the same time.
I am not a Linux guy by any means, but I tried to verify my speed through simple means: I connected to my node via FileZilla and initiated a large file transfer. It gave 104-115 MiB/s consistently, which I think is fine given that I have a Gigabit home network.
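For context, the theoretical ceiling of a Gigabit link in the units FileZilla reports can be worked out quickly (a rough sketch that ignores Ethernet/IP/TCP framing overhead):

```shell
# Gigabit Ethernet = 1,000,000,000 bits/s; convert to MiB/s as FileZilla reports.
# Real transfers lose a few percent of this to protocol framing overhead.
bits_per_sec=1000000000
mib_per_sec=$(( bits_per_sec / 8 / 1024 / 1024 ))
echo "${mib_per_sec} MiB/s"
```

That works out to roughly 119 MiB/s as the absolute maximum, so 104-115 MiB/s is essentially line rate, which already argues against a link stuck at 100 Mb/s.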
I also tried swapping several ports and cables around. A few of the tests I did:
- Passed NICs through directly to the node/VMs (without a vmbr bridge)
- Connected the server directly to the switch (not through a keystone on the patch panel)
- Tried different cables (including pre-terminated ones I hadn't terminated myself - just in case I'm useless)
- Tried other NIC ports that were unused so far
Nothing seems to alter the server's NIC LED colors. The switch my node is connected to is a TL-SG108E, and according to its user guide a solid green LED on its end translates to a Gigabit connection. Furthermore, I ran the cable-diagnostics tests in its internal menu and nothing came back faulty. Generally everything seems good, with the exception of the coloring.
Is there a more Linux-y and correct way to approach this? Run some tools, get some diagnostics, and somehow reach a conclusion?
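In the meantime, one tool-free check I found is reading the negotiated link speed straight from sysfs (a sketch; a value of 1000 means a Gigabit link, and interface names will differ per machine):

```shell
# Print the negotiated link speed (in Mb/s) for every interface via sysfs.
# lo and interfaces without carrier report no speed, hence the "n/a" fallback.
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    speed=$(cat "$iface/speed" 2>/dev/null || echo "n/a")
    echo "$name: ${speed} Mb/s"
done
```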
I tried asking ChatGPT; it suggested the following commands:
ifconfig
ip addr show
ethtool <interface_name>
route -n
ip route show
netstat -s
dmesg | grep <your_nic>
ethtool -S <interface_name>
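Since ifconfig, route, and netstat come from the legacy net-tools package that Debian/Proxmox no longer installs by default, the iproute2 equivalents below cover the same ground (a sketch; enp5s0 is the NIC name from my own node, so substitute yours):

```shell
# Modern replacements for the net-tools commands above (Debian ships iproute2).
ip -br addr show                 # replaces: ifconfig
ss -s                            # replaces: netstat -s
dmesg | grep -i enp5s0 || true   # driver/link messages for one NIC; this and
                                 # netstat are two separate commands, not one pipeline
ethtool enp5s0 || true           # negotiated speed/duplex/auto-negotiation
ethtool -S enp5s0 || true        # per-NIC counters, if the driver exposes them
ip route show                    # replaces: route -n
```

The `|| true` guards just keep the script going when a command has nothing to report on a given machine.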
Of those suggested commands, the ones that produced output yielded the following results:
Code:
root@ip-10-8-0-2:~# ifconfig
ip addr show
ethtool enp5s0
route -n
ip route show
netstat -s dmesg | grep enp5s0
ethtool -S enp5s0
-bash: ifconfig: command not found
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 90:2b:34:3d:06:cc brd ff:ff:ff:ff:ff:ff
3: enp7s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 90:e2:ba:48:c2:40 brd ff:ff:ff:ff:ff:ff
4: enp7s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr2 state UP group default qlen 1000
link/ether 90:e2:ba:48:c2:41 brd ff:ff:ff:ff:ff:ff
5: enp7s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 90:e2:ba:48:c2:42 brd ff:ff:ff:ff:ff:ff
inet 10.8.0.2/24 scope global enp7s0f2
valid_lft forever preferred_lft forever
inet6 fe80::92e2:baff:fe48:c242/64 scope link
valid_lft forever preferred_lft forever
6: enp7s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 90:e2:ba:48:c2:43 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 90:2b:34:3d:06:cc brd ff:ff:ff:ff:ff:ff
inet 10.8.0.2/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::922b:34ff:fe3d:6cc/64 scope link
valid_lft forever preferred_lft forever
8: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 90:e2:ba:48:c2:40 brd ff:ff:ff:ff:ff:ff
inet6 fdc8:79af:648d:ef9e:92e2:baff:fe48:c240/64 scope global dynamic mngtmpaddr
valid_lft 1766sec preferred_lft 1766sec
inet6 fe80::92e2:baff:fe48:c240/64 scope link
valid_lft forever preferred_lft forever
9: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 90:e2:ba:48:c2:41 brd ff:ff:ff:ff:ff:ff
inet6 fe80::92e2:baff:fe48:c241/64 scope link
valid_lft forever preferred_lft forever
10: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master fwbr100i1 state UNKNOWN group default qlen 1000
link/ether ca:ca:5b:ee:0e:b8 brd ff:ff:ff:ff:ff:ff
11: fwbr100i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f2:94:63:05:b9:33 brd ff:ff:ff:ff:ff:ff
12: fwpr100p1@fwln100i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
link/ether 26:68:1b:51:9b:8f brd ff:ff:ff:ff:ff:ff
13: fwln100i1@fwpr100p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i1 state UP group default qlen 1000
link/ether f2:94:63:05:b9:33 brd ff:ff:ff:ff:ff:ff
14: tap100i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master fwbr100i2 state UNKNOWN group default qlen 1000
link/ether 72:b0:99:28:62:b3 brd ff:ff:ff:ff:ff:ff
15: fwbr100i2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:33:16:80:14:72 brd ff:ff:ff:ff:ff:ff
16: fwpr100p2@fwln100i2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether 7a:c7:52:4a:69:02 brd ff:ff:ff:ff:ff:ff
17: fwln100i2@fwpr100p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i2 state UP group default qlen 1000
link/ether 16:33:16:80:14:72 brd ff:ff:ff:ff:ff:ff
18: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
link/ether 12:92:66:63:57:2a brd ff:ff:ff:ff:ff:ff
19: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d6:b3:2d:c6:71:70 brd ff:ff:ff:ff:ff:ff
20: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 66:59:53:a8:fe:4a brd ff:ff:ff:ff:ff:ff
21: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
link/ether d6:b3:2d:c6:71:70 brd ff:ff:ff:ff:ff:ff
Settings for enp5s0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: pg
Wake-on: d
Current message level: 0x0000003f (63)
drv probe link timer ifdown ifup
Link detected: yes
-bash: route: command not found
default via 10.8.0.1 dev vmbr0 proto kernel onlink
10.8.0.0/24 dev vmbr0 proto kernel scope link src 10.8.0.2
10.8.0.0/24 dev enp7s0f2 proto kernel scope link src 10.8.0.2 linkdown
-bash: netstat: command not found
no stats available
I am very willing to provide any other information required.
Regards,
G