High CPU usage by KVM

digidax

Hello,
Only a single VM, running CentOS 4 minimal, produces 25% CPU load on a node.
The node's storage for the VM is based on ZFS RAID 10; the system is an i7-5960X / 16 cores.

Code:
top - 11:35:33 up 1 day,  2:06,  3 users,  load average: 0.33, 0.17, 0.11
Tasks: 383 total,   1 running, 382 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.0 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  11847.6 total,   8417.3 free,   3088.2 used,    342.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   8389.2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                   
 7833 root      20   0 3117900 317284   6448 S  23.9   2.6  23:56.66 kvm

Process Info:
Code:
7833 ?        Sl    18:25 /usr/bin/kvm -id 182 -name 182.internal.com -chardev socket,id=qmp,path=/var/run/qemu-server/182.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qemu-server/182.qmp-event,server,nowait,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/182.pid -daemonize -smbios type=1,uuid=3a1980ee-ab38-4d8b-9fd7-9cdba4d69fe5 -smp 8,sockets=1,cores=8,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/182.vnc,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device vmgenid,guid=ed19ca9c-7318-4a55-8dc8-c6815b5e9325 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device VGA,id=vga,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:332f924ead6 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -device ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7 -drive file=/dev/zvol/rpool/data/vm-182-disk-0,if=none,id=drive-sata0,format=raw,cache=none,aio=native,detect-zeroes=on -device ide-hd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100 -netdev type=tap,id=net0,ifname=tap182i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown -device e1000,mac=1A:80:4B:B2:C3:83,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -machine type=pc+pve1

Output of "perf top":
[screenshot: perf top output]
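Such a profile can be captured on the host against just this VM's process (a sketch; the pidfile path is the one visible in the kvm command line above):

Code:
# Profile only the kvm process of VM 182; press 'q' to quit
perf top -p $(cat /var/run/qemu-server/182.pid)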

Configuration of the VM:
[screenshot: VM configuration]

[screenshot: VM configuration (continued)]

Inside of the VM:
Code:
[root@db4ma ~]# top
top - 10:39:37 up  1:30,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0% us,  0.0% sy,  0.0% ni, 99.9% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   2056104k total,   183308k used,  1872796k free,    11404k buffers
Swap:        0k total,        0k used,        0k free,   108208k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      16   0  4780  556  460 S  0.0  0.0   0:00.71 init
    2 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
    3 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
    4 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/1
    5 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/1
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/2
    7 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/2
    8 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/3
    9 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/3
   10 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/4
   11 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/4
   12 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/5
   13 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/5
   14 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/6
   15 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/6
   16 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/7
   17 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/7
   18 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/0
   19 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/1
   20 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/2
   21 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/3
   22 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/4
   23 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/5
   24 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/6
   25 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/7
   26 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 khelper
   27 root       6 -10     0    0    0 S  0.0  0.0   0:00.00 kthread
   28 root      15 -10     0    0    0 S  0.0  0.0   0:00.00 kacpid
   43 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/0
   44 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/1
   45 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/2
   46 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/3
   47 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/4
   48 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/5
   49 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/6
   50 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/7
   51 root      15   0     0    0    0 S  0.0  0.0   0:00.00 khubd
  102 root      20   0     0    0    0 S  0.0  0.0   0:00.00 pdflush
  103 root      15   0     0    0    0 S  0.0  0.0   0:00.00 pdflush
  104 root      25   0     0    0    0 S  0.0  0.0   0:00.00 kswapd0
  105 root      12 -10     0    0    0 S  0.0  0.0   0:00.00 aio/0
  106 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 aio/1

Versions:
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-5
pve-kernel-helper: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: not correctly installed
ifupdown2: 2.0.1-1+pve4
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-12
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-6
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-5
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

Is it normal that KVM uses 25% of the CPU?
 

No, it is not. But especially with CentOS I have already noticed that graphics hardware emulation consumes a lot of CPU. Try running the VM without a GUI and observe the difference.
If it is independent of graphics, try another VM installation and check whether the same phenomenon occurs.
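A quick way to confirm whether an X server is even involved (a sketch; CentOS 4 normally runs the server as Xorg):

Code:
# Inside the guest: check the runlevel and look for an X server
runlevel                 # "N 3" = multi-user without X, "N 5" = graphical
ps aux | grep '[X]org'   # no output means no X server is running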
 
There is no GUI / X server running. What kind of "other VM installation" do you mean, please?
 
According to Intel ARK, your processor has 8 cores, not 16!
https://ark.intel.com/content/www/u...extreme-edition-20m-cache-up-to-3-50-ghz.html

Hyperthreading does not count as a real core. It's fake!

So technically your VM is likely crowding the hypervisor partially out. This creates contention on the CPU, causes I/O waits, and messes up the whole system.
Configure the VM with an appropriate number of CPUs (start with 1, possibly 2, but not 8!) and I would expect this situation to go away.
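For example (a sketch, using VM ID 182 from this thread; the change takes effect on the next VM start):

Code:
# On the PVE host: shrink the VM from 8 vCPUs to 2
qm set 182 --sockets 1 --cores 2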

For further details have a read here:
https://forum.proxmox.com/threads/cpu-cores-and-threads.65923/
 
Thanks, the PVE GUI then seems to include hyperthreading in its core count. :)
[screenshot: PVE GUI CPU display]
At the moment there are two identical VMs, each assigned 2 CPU cores. Once the database inside the VMs is under production load, I will benchmark the query performance and compare it against having only 1 core assigned to the VM.

Thanks for your help tburger.
 
The host always spends some CPU on interrupts (clocks, mouse, ...) and on network/disk I/O polling. (Also, you are using an e1000 interface, which costs extra CPU inside the kvm process; with virtio you have vhost-net running in the kernel, outside the kvm process.)
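Switching the NIC type is a one-liner with qm (a sketch: the VM ID and MAC address are taken from the kvm command line above, the bridge name vmbr0 is an assumption; the guest needs a virtio-capable kernel and a reboot):

Code:
# On the PVE host: replace the e1000 NIC with virtio, keeping the MAC
# so the guest does not see a brand-new interface
qm set 182 --net0 virtio=1A:80:4B:B2:C3:83,bridge=vmbr0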
 
Now the battle: 179 is running with the virtio network driver, 182 with e1000:
[screenshot: CPU usage of VM 179 vs. VM 182]
That's OK for me; I will check the MySQL query time inside the VM to decide which network driver to use.
 
the PVE GUI then seems to include hyperthreading in its core count
The UI does not differentiate between logical and physical cores, so read as logical cores (threads) the display is correct.
Bottom line: the UI does not free you from using your own mind ;)
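On the host, lscpu shows the real topology at a glance (for an i7-5960X this should report 1 socket, 8 cores per socket, and 2 threads per core):

Code:
lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'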

Thanks for your help tburger.
You are welcome.

Now the battle: 179 is running with the virtio network driver, 182 with e1000:
No real battle imho. Use virtio. All modern Linux distros have the drivers built into the kernel. In contrast to Windows, this is really a no-brainer.
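To verify that a given guest actually ships the driver (a sketch; mainline kernels older than 2.6.25 lack virtio unless the distribution backported it):

Code:
# Inside the guest: check for virtio support
lsmod | grep virtio          # any loaded virtio modules
modprobe -n -v virtio_net    # dry run: can the driver be located?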
 
Memory ballooning helps reclaim memory that "isn't used" by forcing the OS to swap pages out and back in.
So I doubt that this is beneficial to your MySQL DB... On the contrary, it might be / is counter-productive.
You should consider disabling it if it is of no use to you.
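Disabling it per VM is straightforward (a sketch, again assuming VM ID 182; in Proxmox a balloon value of 0 disables the device):

Code:
# On the PVE host: turn off memory ballooning for this VM
qm set 182 --balloon 0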
 
