Packet loss within VM network bridge

jacobq11
Active Member
Sep 13, 2018
I am having a weird issue with packet loss within one of my network bridges. I installed netdata, and it is blowing me up with notifications of:

net_packets.eno1
inbound packets dropped ratio = 0.2%
the ratio of inbound dropped packets vs the total number of received packets of the network interface, during the last 10 minutes
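For reference, as far as I can tell that ratio comes straight from the kernel's per-interface counters, so this rough check reproduces it from sysfs (lifetime totals rather than netdata's 10-minute window):

Bash:
# Lifetime inbound drop ratio for eno1 from the standard sysfs counters;
# netdata computes (roughly) the same ratio over a sliding 10-minute window
rx_dropped=$(cat /sys/class/net/eno1/statistics/rx_dropped)
rx_packets=$(cat /sys/class/net/eno1/statistics/rx_packets)
echo "scale=4; 100 * $rx_dropped / ($rx_packets + $rx_dropped)" | bc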

I've also run some iperf3 tests between the VMs. I have my main internal LAN vmbr, and I have a vmbr just for NFS shares. On my 10.0.0.0/24 network, I get the packet loss, as shown by the retries in the test below. My storage vmbr seems to be running just fine. Both have a 1G port out of the server to my network switch.
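Since TCP retries only hint at loss indirectly, iperf3's UDP mode can report a lost/total datagram percentage directly; the bandwidth below is just a guess at something near what this path sustains:

Bash:
# UDP instead of TCP: the final server-side report prints lost/total datagrams
# -b 2G is an assumed rate close to what the TCP test reached on this path
iperf3 -c 10.0.0.6 -u -b 2G -t 30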

Does anyone have any insight on how to track this issue down? I've tried remaking the Linux bridge, switching to an OVS bridge, replacing cables, and moving the server to a different switch port. I am at a complete loss and need this cleared up.
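For what it's worth, the kernel's drop counter can also be watched live while a test runs, to confirm the drops actually track traffic:

Bash:
# Watch the inbound drop counter tick up (or not) during an iperf3 run
watch -n1 cat /sys/class/net/eno1/statistics/rx_dropped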

Bash:
root@hoarder:~# iperf3 -c 10.0.0.6
Connecting to host 10.0.0.6, port 5201
[ 5] local 10.0.0.9 port 43480 connected to 10.0.0.6 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 321 MBytes 2.69 Gbits/sec 109 754 KBytes
[ 5] 1.00-2.00 sec 310 MBytes 2.60 Gbits/sec 0 1024 KBytes
[ 5] 2.00-3.00 sec 319 MBytes 2.67 Gbits/sec 0 1.22 MBytes
[ 5] 3.00-4.00 sec 365 MBytes 3.06 Gbits/sec 102 731 KBytes
[ 5] 4.00-5.00 sec 325 MBytes 2.73 Gbits/sec 0 1012 KBytes
[ 5] 5.00-6.00 sec 325 MBytes 2.73 Gbits/sec 0 1.21 MBytes
[ 5] 6.00-7.00 sec 332 MBytes 2.79 Gbits/sec 0 1.39 MBytes
[ 5] 7.00-8.00 sec 432 MBytes 3.63 Gbits/sec 89 1.24 MBytes
[ 5] 8.00-9.00 sec 402 MBytes 3.38 Gbits/sec 90 1.06 MBytes
[ 5] 9.00-10.00 sec 381 MBytes 3.20 Gbits/sec 0 1.30 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 3.43 GBytes 2.95 Gbits/sec 390 sender
[ 5] 0.00-10.00 sec 3.43 GBytes 2.94 Gbits/sec receiver

iperf Done.
root@hoarder:~# iperf3 -c 172.16.0.1
Connecting to host 172.16.0.1, port 5201
[ 5] local 172.16.0.6 port 57592 connected to 172.16.0.1 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.79 GBytes 15.4 Gbits/sec 0 1.34 MBytes
[ 5] 1.00-2.00 sec 2.31 GBytes 19.8 Gbits/sec 0 1.73 MBytes
[ 5] 2.00-3.00 sec 2.25 GBytes 19.3 Gbits/sec 0 1.73 MBytes
[ 5] 3.00-4.00 sec 2.26 GBytes 19.4 Gbits/sec 0 1.73 MBytes
[ 5] 4.00-5.00 sec 2.32 GBytes 19.9 Gbits/sec 0 1.73 MBytes
[ 5] 5.00-6.00 sec 2.29 GBytes 19.6 Gbits/sec 0 1.73 MBytes
[ 5] 6.00-7.00 sec 2.21 GBytes 19.0 Gbits/sec 0 1.73 MBytes
[ 5] 7.00-8.00 sec 2.28 GBytes 19.6 Gbits/sec 0 1.73 MBytes
[ 5] 8.00-9.00 sec 2.43 GBytes 20.9 Gbits/sec 0 1.73 MBytes
[ 5] 9.00-10.00 sec 2.41 GBytes 20.7 Gbits/sec 0 1.73 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 22.5 GBytes 19.4 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 22.5 GBytes 19.4 Gbits/sec receiver
What are the actual loss numbers? Can you check and post the output of:
ip -c -details -statistics addr show vmbr0

(you may want to repeat this for the other network interfaces too)
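Something like this dumps the counters for every interface in one go (adjust the list to your interface names):

Bash:
# Packet and drop counters for a list of interfaces
for i in vmbr0 vmbr1 eno1 eno2; do
    ip -statistics link show "$i"
done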
Here is the output of the command on both bridges, both physical interfaces, and a VM NIC.

I have also tried switching the bridge to use a different physical port as the bridge port.

Bash:
root@hyperbox:~# ip -c -details -statistics addr show vmbr0
30: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:25:90:85:50:64 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
    openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.0.0.20/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::9c6c:80ff:fe49:7143/64 scope link
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast
    70160064   281472   0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    15424091   37822    0       0       0       0
root@hyperbox:~# ip -c -details -statistics addr show vmbr1
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:90:85:50:65 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.0:25:90:85:50:65 designated_root 8000.0:25:90:85:50:65 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  204.94 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 172.16.0.3/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:fe85:5065/64 scope link
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast
    110602105  1530291  0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    72936386200 1720898  0       0       0       0
root@hyperbox:~# ip -c -details -statistics addr show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 00:25:90:85:50:64 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9216
    openvswitch_slave numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::225:90ff:fe85:5064/64 scope link dadfailed tentative
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast
    227423065508 161152307 2       3072    0       267805
    TX: bytes  packets  errors  dropped carrier collsns
    154073310995 134766811 0       0       0       0
root@hyperbox:~# ip -c -details -statistics addr show eno2
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether 00:25:90:85:50:65 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9216
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.0:25:90:85:50:65 designated_root 8000.0:25:90:85:50:65 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535
    RX: bytes  packets  errors  dropped overrun mcast
    5787337779 4104728  0       4841    0       75562
    TX: bytes  packets  errors  dropped carrier collsns
    47731400   493321   0       0       0       0
root@hyperbox:~# ip -c -details -statistics addr show tap100i0
23: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
    link/ether 82:b0:d6:ab:fa:a1 brd ff:ff:ff:ff:ff:ff promiscuity 2 minmtu 68 maxmtu 65521
    tun type tap pi off vnet_hdr on persist off
    openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    RX: bytes  packets  errors  dropped overrun mcast
    76862806973 58357717 0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    23668372038 20253908 0       214     0       0
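Reading that back, the drops are concentrated on the physical NICs' RX side (eno1: 3072, eno2: 4841), plus 214 TX drops on tap100i0. If it helps narrow things down, the driver-level counters and ring sizes should be visible via ethtool, assuming the driver exposes them (counter names vary by driver, so the grep is generic):

Bash:
# Driver/hardware drop counters; the available names depend on the NIC driver
ethtool -S eno1 | grep -iE 'drop|miss|fifo|err'
# Current vs. maximum RX/TX ring sizes; a small RX ring drops bursty traffic
ethtool -g eno1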