Layer 3+4 hash policy and incoming traffic

Morphushka
Hello. Let me try to explain my problem:
I have a bond of 4 Gigabit interfaces (don't ask why they are named like this, I don't know):
Code:
auto bond0
iface bond0 inet manual
        slaves rename2 rename3 eno1 rename5
        bond-mode 802.3ad
        bond-miimon 100
        bond_xmit_hash_policy layer3+4
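
For reference, the state the bonding driver actually negotiated (mode, xmit hash policy, LACP partner details per slave) can be checked roughly like this; the exact output varies by kernel version and is omitted here:
Code:
# driver's view of the bond: mode, hash policy, LACP partner info per slave
cat /proc/net/bonding/bond0

# detailed link info and per-interface traffic counters (iproute2)
ip -d link show bond0
ip -s link show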

Next: Zabbix shows "high bandwidth usage >90%", and here is the graph:
[Zabbix graph: bandwidth usage on the bond interfaces]

As you can see, this almost saturates one 1 Gbit interface, while over the same period the other 3 interfaces sit near 0. All incoming traffic is sent to Proxmox via one link, and I can see this in Proxmox:
[Proxmox graph: incoming traffic on the bond interfaces]

Outgoing traffic from Proxmox is split well between the 4 interfaces.

The question is: can this be a problem with the layer3+4 hash policy? Should I use layer2+3 instead? I ask because I read that the layer3+4 policy is not fully LACP / 802.3ad compliant.
Could it be some switch incompatibility, or a misconfigured switch?
Thanks!
 
I have a bond of 4 Gigabit interfaces (don't ask why they are named like this, I don't know):
That's odd and might point to a misconfiguration/driver issue - please check your journal shortly after boot (there should be messages regarding the renaming of interfaces)
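A minimal sketch of how to look for those messages, assuming systemd-journald is in use; the grep pattern is just a guess and may need adjusting:
Code:
# kernel messages from the current boot that mention interface renaming
journalctl -b -k | grep -i renam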

As you can see, this almost saturates one 1 Gbit interface, while over the same period the other 3 interfaces sit near 0. All incoming traffic is sent to Proxmox via one link, and I can see this in Proxmox:
What is the incoming traffic? (If everything comes from one TCP/UDP stream, then that's to be expected!) I.e., what creates the 1 Gbit of traffic?
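If you are not sure, something like the following can show whether the inbound traffic is really just one flow. This is only a sketch: it assumes the bond is named bond0 and that the iftop package is available (it may need to be installed first).
Code:
# live view of the top flows crossing the bond (requires iftop)
iftop -ni bond0

# or list established TCP connections with per-connection details
ss -tni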

The question is: can this be a problem with the layer3+4 hash policy? Should I use layer2+3 instead? I ask because I read that the layer3+4 policy is not fully LACP / 802.3ad compliant.
Could it be some switch incompatibility, or a misconfigured switch?
You could try to repeat the test with layer2+3 (see the sketch below) and see if the traffic is distributed better across the links.
Does the switch log anything of relevance regarding the bond?
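To test layer2+3, the stanza in /etc/network/interfaces could be changed roughly like this (a sketch based on the config you posted; keep in mind that the xmit hash policy only controls how your host spreads outgoing frames, while the distribution of incoming traffic is decided by the switch's own hashing):
Code:
auto bond0
iface bond0 inet manual
        slaves rename2 rename3 eno1 rename5
        bond-mode 802.3ad
        bond-miimon 100
        bond_xmit_hash_policy layer2+3
After changing it, bring the bond down and up again (or reboot) so the new policy is applied.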

I hope this helps!