Hello Proxmox community,
We plan to migrate hundreds of VMs from our old VMware platform to KVM.
Proxmox looks like an ideal fit, but I'm running into a CPU load problem.
On the Proxmox HOST, a single wget download consumes 12% of one CPU at 11 MBytes/s (1518-byte packets).
In a Proxmox KVM GUEST, the same test consumes 25% of a single vCPU on the guest side and 30% of two CPUs on the host side.
On our old VMware platform, the same test consumes 4% of a single vCPU in the GUEST and 7% of one CPU on the HOST side.
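Roughly, the test looks like this (the URL is just a placeholder and the sampling commands are only one way to read the CPU figures):

# download to /dev/null to keep disk I/O out of the picture (test URL is a placeholder)
wget -O /dev/null http://lan-test-server/testfile.bin
# in a second shell: per-process and per-CPU usage, 1-second samples
pidstat -u 1
mpstat -P ALL 1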
On Proxmox, top shows a lot of CPU time spent in HOST kernel space.
The GUEST runs Debian 8 with 2 vCPUs and a virtio NIC.
The Proxmox HOST is an HP Gen9, 2 sockets, 32 cores total, 256 GB RAM.
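To see where that kernel time goes, a quick perf-based sketch (assuming perf is installed on the host):

# sample the host live while the guest download runs
perf top
# or record a few seconds system-wide with call graphs and inspect afterwards
perf record -a -g -- sleep 10
perf report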
There are no dropped or retransmitted packets, and no special network queuing is configured.
No improvement when increasing the receive backlog on the host side.
No improvement when enabling 2 queues on the virtio NIC.
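I checked the counters roughly like this (interface names are only examples):

# NIC / bridge statistics on the host (eno1 is an example name)
ip -s link show eno1
ethtool -S eno1 | grep -iE 'drop|miss|err'
# TCP retransmissions inside the guest
netstat -s | grep -i retrans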
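What I tried there was along these lines (the backlog value, VM ID 100 and bridge vmbr0 are only examples):

# larger receive backlog on the host
sysctl -w net.core.netdev_max_backlog=8192
# 2 virtio queues on the Proxmox VM NIC
qm set 100 -net0 virtio,bridge=vmbr0,queues=2
# libvirt equivalent, inside the <interface> element of the domain XML:
#   <driver name='vhost' queues='2'/>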
Cross tests:
Same behavior on Debian 8 or 9 with qemu-kvm/libvirt.
I have benchmarked with a network bridge (NAT and L2 forwarding) and with macvtap: not better...
Disabling hardware offloads (TSO/GSO/checksumming): not better... (command sketched below)
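The offload test was along these lines (eth0 is only an example interface name):

# disable segmentation offload and checksumming, then verify the settings
ethtool -K eth0 tso off gso off gro off tx off rx off
ethtool -k eth0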
At this level of CPU load I can't believe this is normal behavior.
In addition, the CPU load difference between VMware and KVM intrigues me.
Any feedback would be appreciated.