Guest VM network not working when host is using balance-rr bonding and 2 active interfaces

I will try to find time to dabble with the above setting on .102, but I'm in quite a rush to get this machine into use for some tasks. o_O

I can say, though, that the 6.2 kernel (6.2.6-1-pve) seems to be working for me without issue; details are in the Bugzilla thread.
 
I am seeing the same problem in Proxmox 8.0.3:
m:~# pveversion
pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-3-pve)
 
I have the exact same problem with v8.1.10 (kernel 6.5.13-3-pve).

Note: guest DHCP doesn't work with bonding in round-robin (balance-rr) mode, but works fine with LACP.
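
Since DHCP worked fine for me with LACP, here is roughly the /etc/network/interfaces layout for an 802.3ad bond under a Proxmox bridge. This is only a sketch: the NIC names eno1/eno2 and the addresses are placeholders, and the switch ports have to be configured as an LACP LAG for it to come up.

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.102/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0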
 
Just trying this out now and getting exactly the same issue :(

Proxmox v8.2.4 (kernel 6.8.8-3-pve)

Purely by chance I am only running 1 VM on each of these nodes (it's a test K3s cluster), so I don't think turning vmbr0 into a hub matters too much... but I would definitely like to find a better way around the issue :)
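
For reference, turning vmbr0 into a hub just means setting its MAC ageing time to 0 so it floods every frame out of all ports. A rough sketch of the two ways I know of, assuming the default bridge name vmbr0 (adjust if yours differs):

# temporary, lost on reboot
ip link set vmbr0 type bridge ageing_time 0

# persistent: add this line to the vmbr0 stanza in /etc/network/interfaces (ifupdown2)
bridge-ageing 0

The obvious downside is that the bridge stops learning MACs, so every guest sees every other guest's traffic, which is exactly why I'd prefer a proper fix.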

Note: The reason I chose balance-rr is purely for speed - AFAIK it's the only bonding mode that lets a single connection use the combined bandwidth of both links, e.g. 2 x 2.5Gbit NICs push around 4-4.2Gbps host-to-host.
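
For anyone wanting to reproduce those numbers, a host-to-host test with iperf3 looks roughly like this (10.0.0.2 is just a placeholder for the other node's IP):

# on the receiving node
iperf3 -s

# on the sending node, a single TCP stream for 30 seconds
iperf3 -c 10.0.0.2 -t 30

With balance-rr even a single stream is striped across both NICs, which is how it gets past the 2.5Gbit per-link limit; the other bonding modes only aggregate across multiple flows.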
 