[TUTORIAL] Live migration between Intel Xeon and AMD Epyc2 (Linux guests)

Apr 16, 2020
So this topic comes up every now and then, but there seems to be no definitive "yes" or "no" on whether live migration between Intel and AMD hosts works, or what the options are. I hope our experience here will help others and spark some discussion.

Due to the unfortunate timing of Intel CPU vulnerabilities becoming an issue for hosting providers, our main service cluster's performance has been steadily getting worse (more load, fewer reserves available than predicted two years ago). We therefore had to buy new hardware sooner than expected. After taking a long, hard look at current offers on the market, the choice was more or less clear. I am not looking to spark a flame war over the hardware choice; let it suffice to say that Intel was more or less out of the competition from the start.

We could of course have built a new cluster, but that would have been more hassle than it was worth at this time. We therefore decided to add some new AMD servers alongside the old Intel ones and work on getting live migration of Linux VMs (Debian and Ubuntu) running smoothly for the situations where we need to migrate VMs across hardware boundaries. The old and new hardware is as follows:

Old (Intel):
  • 2x Xeon E5-2630 v4 (Broadwell, 10-core)
  • 768GB DDR4 2400MHz
New (AMD):
  • 2x EPYC 7502 (Epyc2, 32-core)
  • 1TB DDR4 3200MHz
Proxmox is the newest version as of this writing: proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve). Our goal was to optimize the virtual CPUs as much as possible to get the most out of the hardware, not just to find the smallest set of flags that makes live migration work. Our use case involves (among other things) heavy use of SSL and other crypto, so we need to expose as many of the CPU feature flags as possible inside the VMs (crypto performance suffers greatly with the default flags). The following list is the intersection of the CPU flags available on both the AMD and the Intel hosts:

3dnowprefetch abm adx aes aperfmperf apic arat avx avx2 bmi1 bmi2 cat_l3 cdp_l3 clflush cmov constant_tsc cpuid cqm cqm_llc cqm_mbm_local cqm_mbm_total cqm_occup_llc cx16 cx8 de f16c fma fpu fsgsbase fxsr ht lahf_lm lm mca mce mmx movbe msr mtrr nonstop_tsc nopl nx pae pat pclmulqdq pdpe1gb pge pni popcnt pse pse36 rdrand rdseed rdt_a rdtscp rep_good sep smap smep sse sse2 sse4_1 sse4_2 ssse3 syscall tsc vme xsave xsaveopt
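Such an intersection can be computed mechanically. A minimal sketch, assuming the flag list from each host's /proc/cpuinfo has been saved to a file (filenames and the short sample lists are illustrative):

```shell
# On each real host you would generate the list with:
#   grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sort -u > host-flags.txt
# Here two small sample lists stand in for the real ones.
printf '%s\n' aes avx sse2 ssse3 vmx | sort > intel-flags.txt
printf '%s\n' aes avx sse2 ssse3 svm | sort > amd-flags.txt
# comm -12 prints only the lines common to both sorted inputs
comm -12 intel-flags.txt amd-flags.txt
```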

kvm64 (default CPU selection in Proxmox) enables the following flags in a VM:
apic clflush cmov constant_tsc cpuid cpuid_fault cx16 cx8 de fpu fxsr ht hypervisor lm mca mce mmx msr mtrr nopl nx pae pat pge pni pse pse36 sse sse2 syscall tsc tsc_known_freq vme x2apic xtopology

After some trial and error, we arrived at the following combination, which allows live migration of Linux VMs between Intel and AMD and also avoids some bugs and crashes in the VMs:

args: -cpu kvm64,+3dnowprefetch,+abm,+adx,+aes,+arat,+avx,+avx2,+bmi1,+bmi2,+f16c,+fma,+lahf_lm,+movbe,+pclmulqdq,+popcnt,+rdrand,+rdseed,+rdtscp,+sep,+smap,+smep,+sse4.1,+sse4.2,+ssse3,+xsave,+xsaveopt,+kvm_pv_eoi

This line needs to be added to the VM configuration file at /etc/pve/nodes/<nodename>/qemu-server/<vmid>.conf. The CPU selection must also be left empty in the GUI (or the line beginning with "cpu: " removed from the VM configuration). It would be neat to be able to modify the "args:" parameter right in the GUI, though...
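For illustration, the relevant part of such a VM config file might then look like this (VM settings and the other lines are just examples; the args: line is the full one from above, abbreviated here):

```
# /etc/pve/nodes/<nodename>/qemu-server/<vmid>.conf (illustrative excerpt)
args: -cpu kvm64,+3dnowprefetch,+abm, ... ,+xsaveopt,+kvm_pv_eoi
cores: 4
memory: 8192
# note: no "cpu:" line present
```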

Please note that, according to our tests, simply leaving the CPU type empty in the GUI (which results in the QEMU command line argument -cpu kvm64,+sep,+lahf_lm,+kvm_pv_unhalt,+kvm_pv_eoi,enforce) seems to work at first, but after some load and idle time in the VM it leads to a crash involving the kvm_kick_cpu function somewhere inside the paravirtualized halt/unhalt code. The Linux kernels we tested ranged from Debian's 4.9.210-1 to Ubuntu's 5.3.0-46 (and some in between). The Proxmox default therefore seems to be unsafe, and the minimum working command line would probably be args: -cpu kvm64,+sep,+lahf_lm,+kvm_pv_eoi.

Another consideration: Intel CPU vulnerability mitigations are enabled by default in a VM booted with the default kernel command line options on Intel hardware, and DISABLED on AMD hardware. To preserve the mitigations in either case after live migration (at the cost of some performance on AMD hardware), the guest kernel command line needs at least the following: pti=on spectre_v2=retpoline,generic spec_store_bypass_disable=seccomp. Of course, one could instead define HA groups that do not cross CPU vendor boundaries and live with the additional management overhead. We may well end up going that route, but for now this configuration does what we want. YMMV, of course.
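On a Debian/Ubuntu guest this can be done via /etc/default/grub. A sketch against a sample file (on a real guest you would edit /etc/default/grub itself and run update-grub afterwards, then reboot):

```shell
# A sample file stands in for /etc/default/grub; the sed appends the
# mitigation options to the existing GRUB_CMDLINE_LINUX_DEFAULT value.
cat > grub.sample <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
EOF
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 pti=on spectre_v2=retpoline,generic spec_store_bypass_disable=seccomp"/' grub.sample
cat grub.sample
```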

Additional notes about some CPU flags:
  • Proxmox default: sep lahf_lm kvm_pv_unhalt kvm_pv_eoi enforce
  • Accelerated maths: abm adx bmi1 bmi2 f16c fma movbe
  • Crypto code: aes pclmulqdq popcnt
  • SSE/AVX: 3dnowprefetch avx avx2 sse4.1 sse4.2 ssse3
  • Hardware random numbers: rdrand rdseed
  • Timers: arat rdtscp
  • Supervisor Mode Access/Execution Prevention (their absence will lead to crashes in some VMs): smap smep
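As a sanity check after migration, one can verify inside the guest that the expected flags actually made it through. A small sketch (the helper name is made up; the real input would be the flags line from /proc/cpuinfo):

```shell
# check_flags prints any required flag missing from a space-separated flags line
check_flags() {
  line=" $1 "; shift
  for f in "$@"; do
    case "$line" in *" $f "*) ;; *) echo "missing: $f" ;; esac
  done
}
# Example with a truncated sample line; inside a guest you would pass:
#   "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
check_flags "fpu aes avx avx2 smep rdseed" aes avx2 smap smep rdseed
```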


Proxmox Staff Member
Jan 7, 2016
@Stefan_R has put in a lot of work over the last few months to bring a custom CPU model feature to PVE 6.x that will allow defining custom CPU types, including flags, in a config file, and then using those definitions with individual VMs. This should avoid the need to use "args" for overriding CPU flags altogether.
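For reference, that feature stores its definitions in /etc/pve/virtual-guest/cpu-models.conf. A hypothetical entry along the lines of the flag set discussed above might look roughly like this (name and flag selection are illustrative; check the current PVE documentation for the exact syntax):

```
# /etc/pve/virtual-guest/cpu-models.conf (hypothetical example)
cpu-model: intel-amd-migratable
    flags +aes;+avx;+avx2;+pclmulqdq;+rdrand;+rdseed;+smap;+smep
    reported-model kvm64
```

The model would then be selected per VM with a "cpu: custom-intel-amd-migratable" line instead of an "args:" override.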
Apr 16, 2020
Sounds good. Please also take a look at the default flag "kvm_pv_unhalt". As I mentioned, it would cause a kernel crash in paravirtualized unhalt code sooner or later in a migrated VM (started on Intel, migrated to AMD).
Nov 6, 2019
Good write-up. My situation is a bit more of a gong show hardware-wise: I have hosts ranging from dual Xeon X5550s (Dell R710s) all the way to the newest Epyc nodes. The CPU flag argument "args: -cpu Westmere,+tsc-deadline,-x2apic" allows reliable live migration between all nodes of the cluster. Interestingly, I've noticed that the CPU usage reported in the GUI summary section jumps up about 3x when using a custom CPU flag line like that (from 0.7-1% to 2.4-4%, Windows guests). I just chose to live with it.
Jan 22, 2021
Proxmox Virtual Environment 7.2-11
Live migration:
Xeon(R) Bronze 3106 <<--->> Xeon(R) CPU E5-2650 v2
kernel 5.15

There was a problem:
when migrating Xeon(R) CPU E5-2650 v2 -->> Xeon(R) Bronze 3106, everything is OK and the VM keeps working
when migrating Xeon(R) Bronze 3106 -->> Xeon(R) CPU E5-2650 v2, the VM freezes and requires a full shutdown and power-on

I installed pve-kernel-5.19 from the repository:
# apt install pve-kernel-5.19
After restarting, both PVE servers are running pve-kernel-5.19.7-1-pve, and now:
when migrating Xeon(R) CPU E5-2650 v2 -->> Xeon(R) Bronze 3106, everything is OK; the VM works and does not freeze
when migrating Xeon(R) Bronze 3106 -->> Xeon(R) CPU E5-2650 v2, everything is OK; the VM works and does not freeze


May 20, 2021
Hello everyone,

I'm currently seeing freezes when migrating Linux and Windows VMs between two Xeon hypervisors:
  • Xeon E5-2698 v3 @ 2.30GHz
  • Xeon Silver 4108 @ 1.80GHz
I'm running Proxmox 7.2 with all the latest updates installed as of yesterday. The virtual CPU is set to the default (kvm64) on every VM, and the problem is reproducible. So in case you need some testing, let me know; I'm happy to help.

Maybe I should try updating to pve-kernel-5.19 first?

Here's my pveversion --verbose just in case:
proxmox-ve: 7.2-1 (running kernel: 5.15.60-2-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-13
pve-kernel-5.15: 7.2-12
pve-kernel-5.15.60-2-pve: 5.15.60-2
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-3
libpve-guest-common-perl: 4.1-3
libpve-http-server-perl: 4.1-4
libpve-storage-perl: 7.2-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-4
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

