Hello all and happy new year.
We recently increased our bandwidth and now have access to gigabit download and 100Mbit upload speeds.
However, we don't seem to be getting the gigabit performance we had hoped for when routing through Proxmox.
Currently we have a single Intel gigabit NIC (eth2, e1000 driver) assigned to vmbr2 and connected to our WAN.
For the LAN we have a dual-port Intel NIC configured as a bond (802.3ad, eth0+eth1, e1000e driver), connected to gigabit Ethernet ports configured as an LACP port channel on a Cisco switch. The bond is assigned to vmbr10 (the Proxmox management bridge is also on the bond interface).
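For anyone wanting to reproduce this, the topology described above might look roughly like the following in /etc/network/interfaces. This is only a sketch: the interface and bridge names come from our description, but the bonding option names and the management address are assumptions that will vary by Debian/ifenslave version.

```
# Sketch of the described setup (NOT our literal config).
# Bonding option syntax and the address below are assumptions.

auto bond0
iface bond0 inet manual
    slaves eth0 eth1        # dual-port Intel NIC, e1000e
    bond_mode 802.3ad       # matches the LACP port channel on the Cisco switch
    bond_miimon 100

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth2       # single Intel gigabit NIC, e1000 -> WAN
    bridge_stp off
    bridge_fd 0

auto vmbr10
iface vmbr10 inet static
    address 192.168.10.2    # hypothetical management address
    netmask 255.255.255.0
    bridge_ports bond0      # LAN bridge; Proxmox management lives here too
    bridge_stp off
    bridge_fd 0
```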
This was previously a 100/100Mbit connection routed through a pfSense KVM machine with one Ethernet interface each for the WAN and LAN bridges, and that throughput was handled well. The virtual WAN interface is configured with a static IP and NATs the rest of the network behind the LAN interface.
What we have noticed with the gigabit line in place is that the bottleneck seems to be CPU-related: the KVM guest sits at a constant 100% CPU usage during bandwidth peaks (throughput currently tops out at about 300Mbit), but assigning more CPU units (up from the default of 1000) and more RAM didn't make any significant difference...
The host CPU usage doesn't peak, though.
Some questions that came to mind: is it better to use the virtio or the e1000 model for the VM's NIC? We've been using e1000 so far.
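In case it helps anyone testing the same thing: switching the NIC model per VM could look something like the commands below. This is a hedged sketch assuming a Proxmox version with the netX syntax; `<vmid>` is a placeholder, and virtio only helps if the guest OS actually has virtio network drivers (worth checking for the pfSense/FreeBSD version in use before switching).

```
# Sketch: switch the pfSense VM's NICs to virtio (assumes netX-style
# config is available; <vmid> is a placeholder, not a real ID).
qm set <vmid> --net0 virtio,bridge=vmbr2    # WAN bridge
qm set <vmid> --net1 virtio,bridge=vmbr10   # LAN bridge
```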
How many CPU units is it reasonable to assign?
vzcpucheck
Current CPU utilization: 4000
Power of the node: 301360
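As far as we understand, CPU units are relative fair-share weights rather than absolute limits, so the guaranteed slice can be estimated from the vzcpucheck figures above. A small sketch of that arithmetic (assuming the "power of the node" value is the total weight pool those units are measured against):

```python
# Hedged sketch: CPU units as relative weights, per our reading of
# vzcpucheck. The guest's guaranteed share of the node is its units
# divided by the node's total reported power.

def guaranteed_share(vm_units: int, node_power: int) -> float:
    """Fraction of node CPU guaranteed to a guest (assumption:
    node_power is the total weight pool from vzcpucheck)."""
    return vm_units / node_power

# Figures from the vzcpucheck output above:
share = guaranteed_share(4000, 301360)
print(f"{share:.2%}")  # roughly 1.33%
```

If that reading is right, raising units from 1000 to 4000 still guarantees only a tiny slice of this node, which would fit with why bumping units alone made no visible difference.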
Maybe someone could point us in the right direction or give some hints on how to improve performance.
Using kernel: 2.6.32-4-pve
Thanks in advance.