Live migration problems OpenVZ

bazzi

Today I ran into migration problems with 2 CTs. I was migrating them off a node that locks up during backups, onto another node.

The migration fails, but when I start them on the new node they work flawlessly.

Code:
Mar 12 09:56:48 starting migration of CT 102 to node 'timo' (192.168.84.62)
Mar 12 09:56:48 container is running - using online migration
Mar 12 09:56:48 starting rsync phase 1
Mar 12 09:56:48 # /usr/bin/rsync -aH --delete --numeric-ids --sparse /var/lib/vz/private/102 root@192.168.84.62:/var/lib/vz/private
Mar 12 09:59:16 start live migration - suspending container
Mar 12 09:59:16 dump container state
Mar 12 09:59:16 copy dump file to target node
Mar 12 09:59:19 starting rsync (2nd pass)
Mar 12 09:59:19 # /usr/bin/rsync -aH --delete --numeric-ids /var/lib/vz/private/102 root@192.168.84.62:/var/lib/vz/private
Mar 12 09:59:23 dump 2nd level quota
Mar 12 09:59:23 copy 2nd level quota to target node
Mar 12 09:59:25 initialize container on remote node 'timo'
Mar 12 09:59:25 initializing remote quota
Mar 12 09:59:26 turn on remote quota
Mar 12 09:59:26 load 2nd level quota
Mar 12 09:59:26 starting container on remote node 'timo'
Mar 12 09:59:26 restore container state
Mar 12 09:59:26 # /usr/bin/ssh -c blowfish -o 'BatchMode=yes' root@192.168.84.62 vzctl restore 102 --undump --dumpfile /var/lib/vz/dump/dump.102 --skip_arpdetect
Mar 12 09:59:26 Restoring container ...
Mar 12 09:59:26 Starting container ...
Mar 12 09:59:26 Container is mounted
Mar 12 09:59:26 	undump...
Mar 12 09:59:26 Setting CPU units: 1000
Mar 12 09:59:26 Setting CPUs: 2
Mar 12 09:59:26 Configure veth devices: veth102.0
Mar 12 09:59:26 Adding interface veth102.0 to bridge vmbr0 on CT0 for CT102
Mar 12 09:59:26 Stopping container ...
Mar 12 09:59:26 vzquota : (warning) Quota is running for id 102 already
Mar 12 09:59:26 Error: undump failed: Cannot allocate memory
Mar 12 09:59:26 Restoring failed:
Mar 12 09:59:26 Error: do_rst_vma: sc_m(un)lock failed
Mar 12 09:59:26 Error: do_rst_mm: failed to restore vma: -12
Mar 12 09:59:26 Error: do_rst_mm 700448
Mar 12 09:59:26 Error: rst_mm: -12
Mar 12 09:59:26 Error: make_baby: -12
Mar 12 09:59:26 Error: rst_clone_children
Mar 12 09:59:26 Container was stopped
Mar 12 09:59:26 Container start failed
Mar 12 09:59:26 ERROR: online migrate failure - Failed to restore container: Can't umount /var/lib/vz/root/102: Device or resource busy
Mar 12 09:59:26 removing container files on local node
Mar 12 09:59:31 start final cleanup
Mar 12 09:59:31 ERROR: migration finished with problems (duration 00:02:43)
TASK ERROR: migration problems
 
"Cannot allocate memory"

Check the UBC (/proc/user_beancounters) for failcnt hits.
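
A quick way to check is to look for non-zero fail counters on the node. A minimal sketch, assuming an OpenVZ kernel; the helper name `show_failcnt` is my own, not a stock tool:

```shell
# show_failcnt: print UBC resource lines whose fail counter is non-zero.
# In /proc/user_beancounters the columns are:
#   uid, resource, held, maxheld, barrier, limit, failcnt
# and failcnt is always the last field; header and version lines have a
# non-numeric last field, so they are filtered out automatically.
show_failcnt() {
    awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0' "${1:-/proc/user_beancounters}"
}
```

If this prints nothing, no UBC limit has been hit since the counters were last reset; any line it does print names the resource (e.g. privvmpages) whose limit blocked an allocation.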
 
Neither of them had problems in the UBC, no failcnt hits or anything like that. I can't reproduce it now, but I will look into that the next time it happens!
 
