Poor performance with 10GbE and Open vSwitch

Hi,

I'm building a new infrastructure for our data center. It consists of three nodes, each equipped with one dual-port 10GbE NIC. First I tested the network with native Linux networking and got 9.5 Gbit/s in the iperf bandwidth test. Running the same test with OVS, I get at most 3.5 Gbit/s, and my Ceph performance is now affected by this issue.
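For reference, the bandwidth numbers come from a plain iperf run between two of the nodes, along these lines (10.10.10.1 is just a placeholder address, not my real management IP):

    # on node A (server side)
    iperf -s

    # on node B (client side), pointing at node A
    iperf -c 10.10.10.1 -t 30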
I use an OVS bridge connected to my physical switch via an OVS bond, and the management network for all three hosts is connected to an OVS internal port with a tagged VLAN.
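The setup is roughly equivalent to the following ovs-vsctl commands (a sketch only: the interface names eth2/eth3, the bridge name vmbr1, the internal port name mgmt0 and the VLAN tag 100 are placeholders, not necessarily my exact names or tag):

    # create the bridge and add the two 10GbE ports as an OVS bond
    ovs-vsctl add-br vmbr1
    ovs-vsctl add-bond vmbr1 bond0 eth2 eth3 bond_mode=balance-slb

    # internal port for the management network, tagged with VLAN 100
    ovs-vsctl add-port vmbr1 mgmt0 tag=100 -- set interface mgmt0 type=internal
    ip addr add 10.10.10.1/24 dev mgmt0
    ip link set mgmt0 up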
 
I just finished a quick test: I configured an IP directly on the bridge, without any VLAN, and got 7.7 Gbit/s, so the problem seems to come from the VLAN tagging.
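Concretely, the quick test was something like this (same placeholder names and addresses as above):

    # give the bridge itself an untagged IP and run iperf over it
    ip addr add 10.10.10.1/24 dev vmbr1
    ip link set vmbr1 up
    iperf -c 10.10.10.2 -t 30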

Which kernel do you use? I have seen similar problems with OVS + kernel 2.6.32 + VLAN tags, but it works fine with the 3.10 kernel.
 
I use kernel version 2.6.32-29-pve. Everything is working fine between the hosts after changing the MTU to 9000 on all interfaces. I'm currently testing the bandwidth between guests and I get 4.75 Gbit/s, but my system is under heavy load because Ceph is rebuilding a storage.
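In case it helps someone else: the MTU has to be raised on every interface in the path, i.e. the physical NICs, the bridge and the OVS internal port. Something like this (placeholder names again), followed by a quick check that jumbo frames actually pass:

    # raise the MTU on the physical ports, the bridge and the internal port
    ip link set eth2 mtu 9000
    ip link set eth3 mtu 9000
    ip link set vmbr1 mtu 9000
    ip link set mgmt0 mtu 9000

    # verify that a full jumbo frame gets through without fragmentation
    # (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header)
    ping -M do -s 8972 10.10.10.2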
 
