10G interface problems between nodes

jimvman

Hello again everybody,
I'm seeing dropped packets on the Ceph cluster I recently set up across 3 nodes; the interfaces are all 10G links to a Cisco Nexus switch. The kern.log errors look like this:
May 14 22:04:22 proxmox4 kernel: [1304962.931546] bnx2x 0000:01:00.0 eno1: NIC Link is Down
May 14 22:04:22 proxmox4 kernel: [1304962.933083] vmbr1: port 1(eno1) entered disabled state
May 14 22:04:23 proxmox4 kernel: [1304963.955497] bnx2x 0000:01:00.0 eno1: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit
May 14 22:04:23 proxmox4 kernel: [1304963.955588] vmbr1: port 1(eno1) entered blocking state
May 14 22:04:23 proxmox4 kernel: [1304963.955593] vmbr1: port 1(eno1) entered forwarding state
May 14 22:04:34 proxmox4 kernel: [1304975.414967] bnx2x 0000:01:00.0 eno1: NIC Link is Down
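To get a sense of how often the link is flapping (just a rough check, assuming the kernel messages also land in the journal), something like this counts the events over the last day:
Code:
journalctl -k --since "24 hours ago" | grep -c "eno1: NIC Link is Down"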

My network interfaces file (/etc/network/interfaces) looks like this:

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno3 inet manual

iface eno2 inet manual

iface eno4 inet manual

# 1G Interface
auto vmbr0
iface vmbr0 inet static
address 10.2.11.37/24
gateway 10.2.11.1
bridge-ports eno3
bridge-stp off
bridge-fd 0

# 10G Interface for Ceph
auto vmbr1
iface vmbr1 inet static
address 10.2.10.37/24
bridge-ports eno1
bridge-stp off
bridge-fd 0

Is there something I'm missing that would explain the packet loss and link drops? It's affecting the Ceph cluster, since Ceph communicates over the 10G interface. Could this be an STP issue, and how would I debug or resolve it? Any help would be great!
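These are the kinds of standard checks that can show whether the drops are on the NIC itself, the bridge port, or the link (interface names as in my config above):
Code:
# link state plus per-interface RX/TX error and drop counters
ip -s link show eno1
# negotiated speed/duplex and whether a link is detected
ethtool eno1
# bridge port state - eno1 should be in forwarding on vmbr1
bridge link show
# watch for link up/down events live
tail -f /var/log/kern.log | grep -i eno1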

Thanks.
 
Can anyone help with these issues I'm having with dropped packets between the 10G interfaces?
 
hi,

maybe it's a driver issue. Can you post the output of:
Code:
ethtool -i eno1
ethtool -k eno1
ethtool -S eno1
pveversion -v
 
Proxmox 6.1 is installed; I was going to upgrade to 6.2, but here are the results of the commands. One server's 10G interface is currently down, so its stats aren't all that useful right now, as you can see. I was also going to try swapping out the SFP module. Maybe this will provide some help anyway. I can provide the same info for proxmox5 (the other server) as well.

1.
root@proxmox4:~# ethtool -i eno1
driver: bnx2x
version: 1.713.36-0 storm 7.13.11.0
firmware-version: FFV7.2.20 bc 7.2.25
expansion-rom-version:
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

2.
root@proxmox4:~# ethtool -k eno1
Features for eno1:
rx-checksumming: off
tx-checksumming: off
tx-checksum-ipv4: off
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: off
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on [fixed]
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off
tls-hw-record: off [fixed]

3.
root@proxmox4:~# ethtool -S eno1
NIC statistics:
[0]: rx_bytes: 46440
[0]: rx_ucast_packets: 0
[0]: rx_mcast_packets: 169
[0]: rx_bcast_packets: 0
[0]: rx_discards: 0
[0]: rx_phy_ip_err_discards: 0
[0]: rx_skb_alloc_discard: 0
[0]: rx_csum_offload_errors: 0
[0]: tx_exhaustion_events: 0
[0]: tx_bytes: 0
[0]: tx_ucast_packets: 0
[0]: tx_mcast_packets: 0
[0]: tx_bcast_packets: 0
[0]: tpa_aggregations: 0
[0]: tpa_aggregated_frames: 0
[0]: tpa_bytes: 0
[0]: driver_filtered_tx_pkt: 0
[1]: rx_bytes: 0
[1]: rx_ucast_packets: 0
[1]: rx_mcast_packets: 0
[1]: rx_bcast_packets: 0
[1]: rx_discards: 0
[1]: rx_phy_ip_err_discards: 0
[1]: rx_skb_alloc_discard: 0
[1]: rx_csum_offload_errors: 0
[1]: tx_exhaustion_events: 0
[1]: tx_bytes: 6974
[1]: tx_ucast_packets: 0
[1]: tx_mcast_packets: 41
[1]: tx_bcast_packets: 0
[1]: tpa_aggregations: 0
[1]: tpa_aggregated_frames: 0
[1]: tpa_bytes: 0
[1]: driver_filtered_tx_pkt: 0
[2]: rx_bytes: 0
[2]: rx_ucast_packets: 0
[2]: rx_mcast_packets: 0
[2]: rx_bcast_packets: 0
[2]: rx_discards: 0
[2]: rx_phy_ip_err_discards: 0
[2]: rx_skb_alloc_discard: 0
[2]: rx_csum_offload_errors: 0
[2]: tx_exhaustion_events: 0
[2]: tx_bytes: 0
[2]: tx_ucast_packets: 0
[2]: tx_mcast_packets: 0
[2]: tx_bcast_packets: 0
[2]: tpa_aggregations: 0
[2]: tpa_aggregated_frames: 0
[2]: tpa_bytes: 0
[2]: driver_filtered_tx_pkt: 0
[3]: rx_bytes: 0
[3]: rx_ucast_packets: 0
[3]: rx_mcast_packets: 0
[3]: rx_bcast_packets: 0
[3]: rx_discards: 0
[3]: rx_phy_ip_err_discards: 0
[3]: rx_skb_alloc_discard: 0
[3]: rx_csum_offload_errors: 0
[3]: tx_exhaustion_events: 0
[3]: tx_bytes: 90
[3]: tx_ucast_packets: 0
[3]: tx_mcast_packets: 1
[3]: tx_bcast_packets: 0
[3]: tpa_aggregations: 0
[3]: tpa_aggregated_frames: 0
[3]: tpa_bytes: 0
[3]: driver_filtered_tx_pkt: 0
[4]: rx_bytes: 0
[4]: rx_ucast_packets: 0
[4]: rx_mcast_packets: 0
[4]: rx_bcast_packets: 0
[4]: rx_discards: 0
[4]: rx_phy_ip_err_discards: 0
[4]: rx_skb_alloc_discard: 0
[4]: rx_csum_offload_errors: 0
[4]: tx_exhaustion_events: 0
[4]: tx_bytes: 6486
[4]: tx_ucast_packets: 0
[4]: tx_mcast_packets: 0
[4]: tx_bcast_packets: 141
[4]: tpa_aggregations: 0
[4]: tpa_aggregated_frames: 0
[4]: tpa_bytes: 0
[4]: driver_filtered_tx_pkt: 0
[5]: rx_bytes: 0
[5]: rx_ucast_packets: 0
[5]: rx_mcast_packets: 0
[5]: rx_bcast_packets: 0
[5]: rx_discards: 0
[5]: rx_phy_ip_err_discards: 0
[5]: rx_skb_alloc_discard: 0
[5]: rx_csum_offload_errors: 0
[5]: tx_exhaustion_events: 0
[5]: tx_bytes: 308
[5]: tx_ucast_packets: 0
[5]: tx_mcast_packets: 2
[5]: tx_bcast_packets: 0
[5]: tpa_aggregations: 0
[5]: tpa_aggregated_frames: 0
[5]: tpa_bytes: 0
[5]: driver_filtered_tx_pkt: 0
[6]: rx_bytes: 0
[6]: rx_ucast_packets: 0
[6]: rx_mcast_packets: 0
[6]: rx_bcast_packets: 0
[6]: rx_discards: 0
[6]: rx_phy_ip_err_discards: 0
[6]: rx_skb_alloc_discard: 0
[6]: rx_csum_offload_errors: 0
[6]: tx_exhaustion_events: 0
[6]: tx_bytes: 0
[6]: tx_ucast_packets: 0
[6]: tx_mcast_packets: 0
[6]: tx_bcast_packets: 0
[6]: tpa_aggregations: 0
[6]: tpa_aggregated_frames: 0
[6]: tpa_bytes: 0
[6]: driver_filtered_tx_pkt: 0
[7]: rx_bytes: 0
[7]: rx_ucast_packets: 0
[7]: rx_mcast_packets: 0
[7]: rx_bcast_packets: 0
[7]: rx_discards: 0
[7]: rx_phy_ip_err_discards: 0
[7]: rx_skb_alloc_discard: 0
[7]: rx_csum_offload_errors: 0
[7]: tx_exhaustion_events: 0
[7]: tx_bytes: 0
[7]: tx_ucast_packets: 0
[7]: tx_mcast_packets: 0
[7]: tx_bcast_packets: 0
[7]: tpa_aggregations: 0
[7]: tpa_aggregated_frames: 0
[7]: tpa_bytes: 0
[7]: driver_filtered_tx_pkt: 0
rx_bytes: 46440
rx_error_bytes: 0
rx_ucast_packets: 0
rx_mcast_packets: 169
rx_bcast_packets: 0
rx_crc_errors: 0
rx_align_errors: 0
rx_undersize_packets: 0
rx_oversize_packets: 0
rx_fragments: 0
rx_jabbers: 0
rx_discards: 0
rx_filtered_packets: 0
rx_mf_tag_discard: 0
pfc_frames_received: 0
pfc_frames_sent: 0
rx_brb_discard: 0
rx_brb_truncate: 0
rx_pause_frames: 0
rx_mac_ctrl_frames: 0
rx_constant_pause_events: 0
rx_phy_ip_err_discards: 0
rx_skb_alloc_discard: 0
rx_csum_offload_errors: 0
tx_exhaustion_events: 0
tx_bytes: 13858
tx_error_bytes: 0
tx_ucast_packets: 0
tx_mcast_packets: 44
tx_bcast_packets: 141
tx_mac_errors: 0
tx_carrier_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
tx_deferred: 0
tx_excess_collisions: 0
tx_late_collisions: 0
tx_total_collisions: 0
tx_64_byte_packets: 127
tx_65_to_127_byte_packets: 72
tx_128_to_255_byte_packets: 37
tx_256_to_511_byte_packets: 0
tx_512_to_1023_byte_packets: 0
tx_1024_to_1522_byte_packets: 0
tx_1523_to_9022_byte_packets: 0
tx_pause_frames: 0
tpa_aggregations: 0
tpa_aggregated_frames: 0
tpa_bytes: 0
recoverable_errors: 0
unrecoverable_errors: 0
driver_filtered_tx_pkt: 0
Tx LPI entry count: 0
ptp_skipped_tx_tstamp: 0


root@proxmox4:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-helper: 6.1-6
pve-kernel-5.3: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-21
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
Here are the results of those same commands from my other server, proxmox5:
1.
root@proxmox5:~# ethtool -i eno1
driver: bnx2x
version: 1.713.36-0 storm 7.13.11.0
firmware-version: mbi 0.0.0 FFV7.0.47 bc 7.0.49
expansion-rom-version:
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes


2.
root@proxmox5:~# ethtool -k eno1
Features for eno1:
rx-checksumming: off
tx-checksumming: off
tx-checksum-ipv4: off
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: off
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on [fixed]
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off
tls-hw-record: off [fixed]


3.
root@proxmox5:~# ethtool -S eno1
NIC statistics:
[0]: rx_bytes: 21663026079
[0]: rx_ucast_packets: 15516317
[0]: rx_mcast_packets: 1080999
[0]: rx_bcast_packets: 11250
[0]: rx_discards: 2479
[0]: rx_phy_ip_err_discards: 0
[0]: rx_skb_alloc_discard: 0
[0]: rx_csum_offload_errors: 0
[0]: tx_exhaustion_events: 0
[0]: tx_bytes: 58972425214
[0]: tx_ucast_packets: 45506514
[0]: tx_mcast_packets: 368780
[0]: tx_bcast_packets: 620573
[0]: tpa_aggregations: 0
[0]: tpa_aggregated_frames: 0
[0]: tpa_bytes: 0
[0]: driver_filtered_tx_pkt: 0
[1]: rx_bytes: 43183871609
[1]: rx_ucast_packets: 32135959
[1]: rx_mcast_packets: 1210
[1]: rx_bcast_packets: 9093
[1]: rx_discards: 3696
[1]: rx_phy_ip_err_discards: 0
[1]: rx_skb_alloc_discard: 0
[1]: rx_csum_offload_errors: 0
[1]: tx_exhaustion_events: 0
[1]: tx_bytes: 5299850134
[1]: tx_ucast_packets: 6923280
[1]: tx_mcast_packets: 0
[1]: tx_bcast_packets: 0
[1]: tpa_aggregations: 0
[1]: tpa_aggregated_frames: 0
[1]: tpa_bytes: 0
[1]: driver_filtered_tx_pkt: 0
[2]: rx_bytes: 105207502292
[2]: rx_ucast_packets: 73980198
[2]: rx_mcast_packets: 1711
[2]: rx_bcast_packets: 724
[2]: rx_discards: 4967
[2]: rx_phy_ip_err_discards: 0
[2]: rx_skb_alloc_discard: 0
[2]: rx_csum_offload_errors: 0
[2]: tx_exhaustion_events: 0
[2]: tx_bytes: 75251297308
[2]: tx_ucast_packets: 60957020
[2]: tx_mcast_packets: 0
[2]: tx_bcast_packets: 0
[2]: tpa_aggregations: 0
[2]: tpa_aggregated_frames: 0
[2]: tpa_bytes: 0
[2]: driver_filtered_tx_pkt: 0
[3]: rx_bytes: 1783841014
[3]: rx_ucast_packets: 2649313
[3]: rx_mcast_packets: 252
[3]: rx_bcast_packets: 9093
[3]: rx_discards: 0
[3]: rx_phy_ip_err_discards: 0
[3]: rx_skb_alloc_discard: 0
[3]: rx_csum_offload_errors: 0
[3]: tx_exhaustion_events: 0
[3]: tx_bytes: 483128083622
[3]: tx_ucast_packets: 336903824
[3]: tx_mcast_packets: 893
[3]: tx_bcast_packets: 0
[3]: tpa_aggregations: 0
[3]: tpa_aggregated_frames: 0
[3]: tpa_bytes: 0
[3]: driver_filtered_tx_pkt: 0
[4]: rx_bytes: 484658984
[4]: rx_ucast_packets: 1445711
[4]: rx_mcast_packets: 2128
[4]: rx_bcast_packets: 14462
[4]: rx_discards: 0
[4]: rx_phy_ip_err_discards: 0
[4]: rx_skb_alloc_discard: 0
[4]: rx_csum_offload_errors: 0
[4]: tx_exhaustion_events: 0
[4]: tx_bytes: 57721176084
[4]: tx_ucast_packets: 97119734
[4]: tx_mcast_packets: 1
[4]: tx_bcast_packets: 0
[4]: tpa_aggregations: 0
[4]: tpa_aggregated_frames: 0
[4]: tpa_bytes: 0
[4]: driver_filtered_tx_pkt: 0
[5]: rx_bytes: 164905612291
[5]: rx_ucast_packets: 132098168
[5]: rx_mcast_packets: 2546
[5]: rx_bcast_packets: 9091
[5]: rx_discards: 90513
[5]: rx_phy_ip_err_discards: 0
[5]: rx_skb_alloc_discard: 0
[5]: rx_csum_offload_errors: 0
[5]: tx_exhaustion_events: 0
[5]: tx_bytes: 25016444745
[5]: tx_ucast_packets: 20775561
[5]: tx_mcast_packets: 1
[5]: tx_bcast_packets: 0
[5]: tpa_aggregations: 0
[5]: tpa_aggregated_frames: 0
[5]: tpa_bytes: 0
[5]: driver_filtered_tx_pkt: 0
[6]: rx_bytes: 28249296772
[6]: rx_ucast_packets: 75968139
[6]: rx_mcast_packets: 4050
[6]: rx_bcast_packets: 2793
[6]: rx_discards: 0
[6]: rx_phy_ip_err_discards: 0
[6]: rx_skb_alloc_discard: 0
[6]: rx_csum_offload_errors: 0
[6]: tx_exhaustion_events: 0
[6]: tx_bytes: 23093720882
[6]: tx_ucast_packets: 22167421
[6]: tx_mcast_packets: 0
[6]: tx_bcast_packets: 0
[6]: tpa_aggregations: 0
[6]: tpa_aggregated_frames: 0
[6]: tpa_bytes: 0
[6]: driver_filtered_tx_pkt: 0
[7]: rx_bytes: 18371834928
[7]: rx_ucast_packets: 13986105
[7]: rx_mcast_packets: 60259
[7]: rx_bcast_packets: 5522
[7]: rx_discards: 3039
[7]: rx_phy_ip_err_discards: 0
[7]: rx_skb_alloc_discard: 0
[7]: rx_csum_offload_errors: 0
[7]: tx_exhaustion_events: 0
[7]: tx_bytes: 76957514452
[7]: tx_ucast_packets: 55221449
[7]: tx_mcast_packets: 0
[7]: tx_bcast_packets: 0
[7]: tpa_aggregations: 0
[7]: tpa_aggregated_frames: 0
[7]: tpa_bytes: 0
[7]: driver_filtered_tx_pkt: 0
rx_bytes: 395689797943
rx_error_bytes: 11840153974
rx_ucast_packets: 347779910
rx_mcast_packets: 1153155
rx_bcast_packets: 62028
rx_crc_errors: 8879355
rx_align_errors: 0
rx_undersize_packets: 0
rx_oversize_packets: 0
rx_fragments: 75
rx_jabbers: 0
rx_discards: 104694
rx_filtered_packets: 0
rx_mf_tag_discard: 0
pfc_frames_received: 0
pfc_frames_sent: 0
rx_brb_discard: 0
rx_brb_truncate: 0
rx_pause_frames: 0
rx_mac_ctrl_frames: 6
rx_constant_pause_events: 0
rx_phy_ip_err_discards: 0
rx_skb_alloc_discard: 0
rx_csum_offload_errors: 0
tx_exhaustion_events: 0
tx_bytes: 805440512441
tx_error_bytes: 0
tx_ucast_packets: 645574803
tx_mcast_packets: 369675
tx_bcast_packets: 620573
tx_mac_errors: 0
tx_carrier_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
tx_deferred: 0
tx_excess_collisions: 0
tx_late_collisions: 0
tx_total_collisions: 0
tx_64_byte_packets: 1706087
tx_65_to_127_byte_packets: 66264480
tx_128_to_255_byte_packets: 52623842
tx_256_to_511_byte_packets: 3400574
tx_512_to_1023_byte_packets: 4188101
tx_1024_to_1522_byte_packets: 518379976
tx_1523_to_9022_byte_packets: 0
tx_pause_frames: 0
tpa_aggregations: 0
tpa_aggregated_frames: 0
tpa_bytes: 0
recoverable_errors: 0
unrecoverable_errors: 0
driver_filtered_tx_pkt: 0
Tx LPI entry count: 0
ptp_skipped_tx_tstamp: 0


4.
root@proxmox5:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-helper: 6.1-6
pve-kernel-5.3: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-21
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
The firmware versions don't match (FFV7.2.20 bc 7.2.25 vs. FFV7.0.47 bc 7.0.49 in the first command's output). Can you try updating them to the same/latest version?
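A quick way to compare driver and firmware versions across all nodes is something like the sketch below (the hostnames and the interface name eno1 are just examples taken from this thread; it also assumes root SSH between the nodes, as is typical in a PVE cluster):
Code:
for h in proxmox4 proxmox5 proxmox6; do
    echo "== $h =="
    ssh "$h" "ethtool -i eno1 | grep -E '^(driver|version|firmware-version)'"
done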
 
Hi all,
Thanks for your help. I swapped out the SFP modules for authentic Dell ones, and also discovered (which was probably the culprit) that the fiber patch cables in use were single-mode! I replaced them with multi-mode (MM) fiber and now all is good. I haven't upgraded the firmware for now, since I'm no longer seeing any packet loss.

Sometimes it's important to look at the simple things!
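For anyone who runs into the same thing: if the NIC and optic support it, the SFP module's EEPROM can be read to confirm what kind of transceiver and fiber it expects (a generic check, not something I ended up needing, since swapping the cables already fixed it):
Code:
# dump the SFP/SFP+ module EEPROM; the transceiver type and supported
# link length fields show whether the optic expects single-mode
# (e.g. 10GBASE-LR) or multi-mode (e.g. 10GBASE-SR) fiber
ethtool -m eno1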
 
Great!

You can mark the thread as [SOLVED] so others in the same situation know what to expect :)
 
