Gigabit performance.

wetwilly

New Member
Apr 4, 2010
Hello all and happy new year.

We recently increased our bandwidth and now have access to gigabit download and 100Mbit upload speeds.

Somehow we aren't getting the gigabit performance we had hoped for when routing through Proxmox.

Currently we have a single Intel gigabit NIC (eth2, e1000 driver) assigned to vmbr2, connected to our WAN.

For the LAN we have a dual-port Intel NIC configured as a bond (802.3ad, eth0+eth1, e1000e driver), connected to Gigabit Ethernet ports configured as an LACP port channel on a Cisco switch. The bond is assigned to vmbr10 (the Proxmox management bridge is also on the bond interface).
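For reference, a minimal sketch of what that bond + bridge setup might look like in /etc/network/interfaces (the addresses are placeholders, and the exact option names can vary with your ifenslave version):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad
        bond_miimon 100

auto vmbr10
iface vmbr10 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0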

This was previously a 100/100Mbit connection routed through a pfSense KVM machine with one Ethernet interface each for the WAN and LAN bridges, and that throughput was handled well. The virtual WAN interface is configured with a static IP and NATs the rest of the network from the LAN interface.

What we have noticed with the gigabit in place is that it seems to be CPU-related: the KVM guest hits a constant 100% CPU usage during bandwidth peaks (with the gigabit line, throughput currently peaks at about 300Mbit), but assigning more CPU units (up from the default of 1000) and more RAM didn't make any significant difference.
The host CPU usage doesn't peak, though.

One question that came to mind: is it better to use virtio or e1000 for the VM's NIC? We've been using the e1000 model so far.

How many CPU units is it reasonable to assign?
vzcpucheck
Current CPU utilization: 4000
Power of the node: 301360
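(For context: CPU units are a relative scheduling weight, not a hard cap. Assuming a VM ID of 101, they can also be raised from the CLI, e.g.:

qm set 101 --cpuunits 2000
)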

Maybe someone could point us in the right direction or give some hints on how to improve performance.

Using kernel: 2.6.32-4-pve

Thanks in advance.
 
Hi,
I/O with KVM needs CPU power (single-threaded). If you have a CPU with many but not-so-fast cores, your KVM I/O will not be as fast as with a CPU that has fewer but faster cores (it also depends on other bottlenecks).

The virtio driver should give better performance.
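For example, a guest NIC can be switched to virtio in the VM's config file; a sketch, assuming VM ID 101 (the config path and the exact line format vary between Proxmox versions, and the MAC address is a placeholder):

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr2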
What transfer rates do you reach with your NICs? Use iperf between hosts (on both networks), and then iperf from the VM to the host (and to the outside), to see if the bottleneck is something else.
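A minimal iperf run for that (iperf2 syntax; 192.168.10.2 stands in for the host's LAN address):

# on the Proxmox host: start an iperf server
iperf -s

# from the VM (or another host): run a 30-second test against it
iperf -c 192.168.10.2 -t 30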

Udo
 
