CPU type host vs. kvm64

brucexx

Renowned Member
Mar 19, 2015
I have all nodes with exactly the same CPU model, core count, etc. In general, is there a significant increase in CPU performance with the host type vs. the default kvm64?

I have all VMs set to kvm64, but I was reading some Proxmox documentation and it says: "If you don’t care about live migration or have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance."

Thank you
 
So yes, using CPU type "host" will increase the performance of your VMs, but it decreases portability if you run the VM in a cluster. Switchover is not always possible and can result in the guest crashing after migration. This is the tradeoff you have to consider.
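For reference, the CPU type is set per VM. A minimal sketch, assuming an example VM ID of 100:

# show the current CPU type of the VM (the line may be absent if the default kvm64 is in use)
qm config 100 | grep '^cpu'
# switch the VM to the host CPU model; a full stop/start of the VM is needed for it to take effect
qm set 100 --cpu host
# revert to the default
qm set 100 --cpu kvm64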
 
Not in this case, as the OP stated that "I have all nodes with exactly the same CPU model", so live migration among nodes won't be an issue ;)
 
Unfortunately, that's not true. The register status is not synchronized and this is also stated in the document you linked:

Live migration is unsafe when this mode is used as libvirt / QEMU cannot guarantee a stable CPU is exposed to the guest across hosts. This is the recommended CPU to use, provided live migration is not required.

Try migrating a VM that has virtualization exposed and runs a nested hypervisor. It'll just crash after migrating.
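To see whether a guest actually has virtualization extensions exposed (the case that makes migration risky here), a quick check from inside the guest is enough:

# run inside the guest; empty output means no VT-x/AMD-V flag is exposed to it
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u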
 
I will test that today. @LnxBil, I see that in the link VictorSTS provided, and I was confused by this as well.

Proxmox documentation states:
"In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don’t care about live migration or have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance."

Perhaps this sentence is misleading or gives the impression that safe live migration can be done. I understand it as: if I have all CPUs the same across the cluster, I would be able to do live migration safely. Fortunately, I am not running nested hypervisors on it. I will update with live migration test results.

Thank you
 
Thank you for the information. Maybe there are different statements (older and newer ones) in the wild. Here it is stated that migration with nested virtualization is not possible, but I just did it with a Hyper-V hypervisor. Maybe it has changed in PVE 7 and not all of the documentation has been updated?
 
I just live migrated one of the systems back and forth across 4 nodes several times. No issues with the same CPU model. The system is stable and operational after 6 live migrations with the host CPU type configured.
 
Thank you for the link. I wonder when this changed; the default on Proxmox is still kvm64 on 7.3-4, perhaps because it is the most compatible one. I started to use host as our CPUs in the cluster are identical, and I haven't had any issues. I am curious what the best compromise is between compatibility and performance/feature set.
 
@brucexx I'm not sure anymore if what they say in the link is correct, though. I also had a look at the actual CPU instruction sets and there were around 2 more flags for kvm64 than for qemu64, so we need more context. I wouldn't suggest anyone switch to qemu64 at this point :)
Here are the differences on a random server:

host CPU:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology eagerfpu pni pclmulqdq vmx ssse3 cx16 pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm rsb_ctxsw tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust smep erms xsaveopt arat umip arch_capabilities
qemu64 CPU:
fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology eagerfpu pni cx16 x2apic hypervisor lahf_lm rsb_ctxsw
kvm64 CPU:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology eagerfpu pni cx16 x2apic hypervisor lahf_lm rsb_ctxsw
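For anyone who wants to reproduce this comparison, one rough way (file names are just examples) is to dump the flag list inside each guest and diff the dumps:

# inside the first guest (e.g. CPU type kvm64)
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sort > flags-kvm64.txt
# inside the second guest (e.g. CPU type host)
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sort > flags-host.txt
# copy both files to one machine, then compare
diff flags-kvm64.txt flags-host.txt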
 
I stumbled across your blog post because I had problems with my Nextcloud AIO instance running on the default kvm64:
[screenshot: CPU usage graph with kvm64]

It was taking 101% of the CPU resources across all cores and the web UI was barely accessible.

When I switched to "x86-64-v2-AES" there was an amazing performance improvement. Now the CPU is almost idle most of the time:
[screenshot: CPU usage graph after switching to x86-64-v2-AES]

and the web UI responds like in the good old Nextcloud days :)

I assume the AES feature added in the latest release brought the improvement (at least in this case).
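That matches the flag dumps earlier in the thread: kvm64 does not expose aes, while host (and, by name, x86-64-v2-AES) does. It can be verified from inside the guest:

# prints "aes" if the AES-NI flag is exposed to the guest, nothing otherwise
grep -m1 '^flags' /proc/cpuinfo | grep -wo aes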
 
Changing the processor type from "x86-64-v2-AES" to "host" (Xeon W-2155) made a huge difference in disk write speed with ZFS native encryption in my setup:

"x86-64-v2-AES": 30 MB/s write speed
"host": 500 MB/s write speed
 
