Is it possible to migrate CTs online between two different processors?

lojasyst

Hello,

I have a two-node cluster:
First node: Dell PE2950 (Intel Xeon)
Second node: desktop Intel Core i7

Offline migration works in both directions without problems.

Online migration from the Xeon to the Core i7 also works, but online migration from the Core i7 back to the Xeon fails with the following:

Dec 24 07:06:14 starting migration of CT 190 to node 'pxdell' (192.168.2.4)
Dec 24 07:06:14 container is running - using online migration
Dec 24 07:06:14 starting rsync phase 1
Dec 24 07:06:14 # /usr/bin/rsync -aHAX --delete --numeric-ids --sparse /var/lib/vz/private/190 root@192.168.2.4:/var/lib/vz/private
Dec 24 07:06:22 start live migration - suspending container
Dec 24 07:06:22 dump container state
Dec 24 07:06:22 copy dump file to target node
Dec 24 07:06:22 starting rsync (2nd pass)
Dec 24 07:06:22 # /usr/bin/rsync -aHAX --delete --numeric-ids /var/lib/vz/private/190 root@192.168.2.4:/var/lib/vz/private
Dec 24 07:06:23 dump 2nd level quota
Dec 24 07:06:23 copy 2nd level quota to target node
Dec 24 07:06:24 initialize container on remote node 'pxdell'
Dec 24 07:06:24 initializing remote quota
Dec 24 07:06:25 turn on remote quota
Dec 24 07:06:25 load 2nd level quota
Dec 24 07:06:25 starting container on remote node 'pxdell'
Dec 24 07:06:25 restore container state
Dec 24 07:06:25 # /usr/bin/ssh -o 'BatchMode=yes' root@192.168.2.4 vzctl restore 190 --undump --dumpfile /var/lib/vz/dump/dump.190 --skip_arpdetect
Dec 24 07:06:25 Restoring container ...
Dec 24 07:06:25 Starting container ...
Dec 24 07:06:25 Container is mounted
Dec 24 07:06:25 undump...
Dec 24 07:06:25 Setting CPU units: 1000
Dec 24 07:06:25 Setting CPUs: 1
Dec 24 07:06:25 Configure veth devices: veth190.0
Dec 24 07:06:25 Adding interface veth190.0 to bridge vmbr0 on CT0 for CT190
Dec 24 07:06:25 vzquota : (warning) Quota is running for id 190 already
Dec 24 07:06:25 Error: undump failed: Bad address
Dec 24 07:06:25 Restoring failed:
Dec 24 07:06:25 Error: FPU context can't be restored. The processor is incompatible.
Dec 24 07:06:25 Container is unmounted
Dec 24 07:06:25 ERROR: online migrate failure - Failed to restore container: Container start failed
Dec 24 07:06:25 removing container files on local node
Dec 24 07:06:25 start final cleanup
Dec 24 07:06:26 ERROR: migration finished with problems (duration 00:00:12)
TASK ERROR: migration problems
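The "FPU context can't be restored" error above suggests the two CPUs save floating-point state in different formats. A quick, hedged check (my guess at the cause, not confirmed by the log) is whether each node's CPU advertises the xsave/avx extended-state features; a container suspended on a CPU with extended FPU state writes a larger dump than the classic 512-byte FXSAVE area an older CPU can restore, which would explain why only one direction fails:

```shell
# Run on each node: does this CPU advertise the xsave extended-state feature?
# (Sketch; compare the answers between the two nodes.)
if grep -m1 '^flags' /proc/cpuinfo | grep -qw xsave; then
    echo "xsave supported"
else
    echo "xsave not supported"
fi
```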


Both servers run the same, latest Proxmox version:

# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-40
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1




Can you tell me what is going on?

Thanks
 
I have the exact same error with the latest Proxmox 3.2. Live migration from the older to the newer CPU works, but not vice versa. That is, the migration itself finishes, but the container shuts down and needs to be restarted. There is no kernel panic.

Sep 02 17:30:59 Error: <3>FPU state size unsupported: 832 (current: 512)
Sep 02 17:30:59 Error: FPU context can't be restored. The processor is incompatible.

This issue is mentioned in a few places, but without a solution:
https://bugzilla.openvz.org/show_bug.cgi?id=2461
http://forum.openvirtuozzo.org/index.php?t=msg&goto=48796&&srch=migrating#msg_48796

OLD BOX:
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5680 @ 3.33GHz


NEW BOX:
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
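The two `/proc/cpuinfo` excerpts above omit the feature flags, which is where the incompatibility shows up: the E5-2690 (Sandy Bridge) supports AVX, so its FPU dump is the larger XSAVE layout (832 bytes) rather than the 512-byte FXSAVE area the X5680 restores. A small sketch for diffing the flag sets of two nodes (the flag lists below are abridged and illustrative, not the actual full sets; take the real ones from `grep -m1 '^flags' /proc/cpuinfo` on each node):

```shell
# flag_diff: print CPU feature flags present in the first set but not the second.
flag_diff() {
    for f in $1; do
        case " $2 " in
            *" $f "*) ;;           # flag also present in second set: skip
            *) echo "$f" ;;        # flag missing from second set: report it
        esac
    done
}

# Abridged, illustrative flag sets for the two boxes in this thread.
old_x5680="fpu sse sse2 ssse3 sse4_1 sse4_2 aes xsave"
new_e5_2690="fpu sse sse2 ssse3 sse4_1 sse4_2 aes xsave avx"

flag_diff "$new_e5_2690" "$old_x5680"
```

With these illustrative sets it reports `avx`, the extended state the older box cannot restore.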


I wonder if there is a way to disable the newer capabilities on the new CPU, making it more compatible.
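One untested idea along those lines: the Linux `noxsave` boot parameter disables XSAVE and the extended processor states (including AVX), so dumps taken on the newer box should fall back to the 512-byte FXSAVE format. Whether the OpenVZ 2.6.32 (RHEL6-based) kernel honours it needs verifying (check `/proc/cmdline` and the flags in `/proc/cpuinfo` after reboot). A sketch for the newer node's GRUB config:

```
# /etc/default/grub on the newer (E5-2690) node -- sketch, verify your kernel
# recognizes the parameter before relying on it for live migration
GRUB_CMDLINE_LINUX_DEFAULT="quiet noxsave"
```

Then run `update-grub` and reboot; AVX-using workloads inside the containers would lose that capability, which is the trade-off.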


Sep 02 17:27:26 starting migration of CT 100 to node 'lagrange' (192.168.116.191)
Sep 02 17:27:26 container is running - using online migration
Sep 02 17:27:26 starting rsync phase 1
Sep 02 17:27:26 # /usr/bin/rsync -aHAX --delete --numeric-ids --sparse /tank/proxmox/private/100 root@192.168.116.191:/tank/proxmox/private
Sep 02 17:30:48 start live migration - suspending container
Sep 02 17:30:48 dump container state
Sep 02 17:30:49 copy dump file to target node
Sep 02 17:30:55 starting rsync (2nd pass)
Sep 02 17:30:55 # /usr/bin/rsync -aHAX --delete --numeric-ids /tank/proxmox/private/100 root@192.168.116.191:/tank/proxmox/private
Sep 02 17:30:55 dump 2nd level quota
Sep 02 17:30:55 copy 2nd level quota to target node
Sep 02 17:30:57 initialize container on remote node 'lagrange'
Sep 02 17:30:57 initializing remote quota
Sep 02 17:30:57 turn on remote quota
Sep 02 17:30:57 load 2nd level quota
Sep 02 17:30:57 starting container on remote node 'lagrange'
Sep 02 17:30:57 restore container state
Sep 02 17:30:59 # /usr/bin/ssh -o 'BatchMode=yes' root@192.168.116.191 vzctl restore 100 --undump --dumpfile /tank/proxmox/dump/dump.100 --skip_arpdetect
Sep 02 17:30:57 Restoring container ...
Sep 02 17:30:57 Starting container ...
Sep 02 17:30:57 Container is mounted
Sep 02 17:30:57 undump...
Sep 02 17:30:57 Adding IP address(es): 192.168.116.13
Sep 02 17:30:57 Setting CPU units: 1000
Sep 02 17:30:57 Setting CPUs: 2
Sep 02 17:30:59 vzquota : (warning) Quota is running for id 100 already
Sep 02 17:30:59 Error: undump failed: Bad address
Sep 02 17:30:59 Restoring failed:
Sep 02 17:30:59 Error: <3>FPU state size unsupported: 832 (current: 512)
Sep 02 17:30:59 Error: FPU context can't be restored. The processor is incompatible.
Sep 02 17:30:59 Container start failed
Sep 02 17:30:59 ERROR: online migrate failure - Failed to restore container: Can't umount /var/lib/vz/root/100: Device or resource busy
Sep 02 17:30:59 removing container files on local node
Sep 02 17:31:00 start final cleanup
Sep 02 17:31:00 ERROR: migration finished with problems (duration 00:03:34)
TASK ERROR: migration problems
 
