Change default CPU type?

j_s

I recently upgraded a Linux VM to the latest packages. As a result, Chromium will no longer run because it requires sse3 CPU instructions. I've used the default of kvm64 because "why not? it's the default". Well, I'm realizing that I should change it to something more suitable. From my homework, it seems it should be set to "host". All hosts are identical in all ways and we don't do nested virtualization at all.

So I can easily do a quick CLI script to change all of the existing VMs from "(default) kvm64" to "host". However, upon further thought, I want to change the default itself to "host". This would prevent questions and problems with future VMs. The reason is that other people create VMs, and they often use the defaults because "why not? it's the default".
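
To be concrete about the quick CLI script part, this is roughly what I have in mind (untested sketch using the stock qm tool; it just blindly sets every VM on the node to "host", and the change only applies after each VM is fully stopped and started again):

# Sketch: switch every VM on this node from the default CPU type to "host".
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    echo "setting cpu=host on VM $vmid"
    qm set "$vmid" --cpu host
done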

Does anyone know how I could do this? Are there any reasons I shouldn't? I did some Googling and couldn't find anyone who has asked this question, so either I'm the only person that's forward-thinking, or there are more nuances to this than I'm aware of.

Thanks!
 
You still need kvm64 if you want to use live migrations, even if your nodes are completely identical. One of the staff explained that a few months ago.
 
I read what you just wrote, as well as a bunch of people saying that it isn't true, or at least that it's more nuanced than that. I did test this with several VMs and had no problems.
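
For transparency, this is roughly how I tested (the VM ID and node names here are placeholders, not our real ones):

# Ping-pong a running test VM between two nodes and check it stays up.
qm set 9001 --cpu host
qm stop 9001 && qm start 9001            # restart so the VM actually boots with cpu=host
qm migrate 9001 pve2 --online            # live migrate away...
ssh pve2 qm status 9001                  # ...expect "status: running"
ssh pve2 qm migrate 9001 pve1 --online   # ...and back again
qm status 9001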

From what I've read, these are the two big reasons not to use "host":

1. Nested virtualization. Apparently this is just not possible with "host" because CPU registers can't be copied. The VM will migrate, but the VM will immediately crash.
2. The source host has CPU instructions that aren't supported on the destination host. Apparently the VM will migrate, but as soon as a not-supported CPU instruction is executed, the VM will crash.

That's why I wrote above:

All hosts are identical in all ways and we don't do nested virtualization at all.

If there are more reasons than those two, I'm all ears. But I see kvm64 as something I should probably avoid if possible, especially since it doesn't even support sse3. In our case, the vast majority of our VMs run Chromium, and switching away from Chromium is not an option. Some forum posts even mentioned that QEMU itself doesn't recommend using kvm64 (lol!?).

We've had other unusual performance issues in the past, which I didn't troubleshoot (others tried and gave up/failed). Other people argued that certain CPU instruction sets should have given much better performance than they did, and questioned the heck out of everything, but never found the problem. I may have stumbled across that problem by accident by not being able to run Chromium in a VM.

For comparison, kvm64 supports this out of the box:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm constant_tsc nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti


Choosing host gives us this:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat umip md_clear arch_capabilities

That's a crapton of flags missing from the virtual CPU. These CPUs are E5-2680v3s, which are circa 2014. Newer CPUs should have even more than this.
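
Both flag lists above are straight out of /proc/cpuinfo inside the guest. If anyone wants to check their own VMs, something like this works (ssse3 is just an example flag):

grep -m1 '^flags' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo | grep -qw ssse3 && echo "ssse3 present" || echo "ssse3 missing"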

So I think I'm covered on this, and it all "should" work with the CPU type set to "host". But again, feel free to tell me I'm wrong. I've always used the default because "it worked", until now, and I'd rather not figure out later that I shot myself in the face. ;)
 
I think it was primarily because of those two reasons (nested virtualization and mismatched CPU instructions between nodes). Maybe one of the staff has deeper knowledge and can add further points after the weekend.