OpenVZ Online Container Migration Fails

rpuglisi

OpenVZ online container migration still fails under Proxmox 2.0:

Mar 07 14:46:12 starting migration of CT 100 to node 'Proxmox-Test2' (10.50.10.2)
Mar 07 14:46:12 container is running - using online migration
Mar 07 14:46:12 container data is on shared storage 'drbdstr'
Mar 07 14:46:12 start live migration - suspending container
Mar 07 14:46:12 dump container state
Mar 07 14:46:12 dump 2nd level quota
Mar 07 14:46:14 initialize container on remote node 'Proxmox-Test2'
Mar 07 14:46:14 initializing remote quota
Mar 07 14:46:14 # /usr/bin/ssh -c blowfish -o 'BatchMode=yes' root@10.50.10.2 vzctl quotainit 100
Mar 07 14:46:14 vzquota : (error) quota check : stat /data/private/100: Input/output error
Mar 07 14:46:14 ERROR: online migrate failure - Failed to initialize quota: vzquota init failed [1]
Mar 07 14:46:14 start final cleanup
Mar 07 14:46:14 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems

Offline migration also fails with the same vzquota error.

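For reference, the failing step is the stat of /data/private/100 on the target node, so a quick check there (assuming the shared storage is mounted at /data, as in the log) would be something like:

mount | grep /data            # is the shared filesystem actually mounted on the target?
ls -ld /data/private/100      # does the container's private area stat cleanly?
dmesg | tail -n 20            # look for filesystem or DRBD I/O errors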

My system is:

pve-manager: 2.0-38 (pve-manager/2.0/af81df02)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-23
qemu-server: 2.0-25
pve-firmware: 1.0-15
libpve-common-perl: 1.0-17
libpve-access-control: 1.0-17
libpve-storage-perl: 2.0-12
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-5
ksm-control-daemon: 1.1-1

Thanks,
Rich
 
This is how the two systems currently look:

***** THIS IS Proxmox-Test *****

/etc/pve/openvz directory

total 0


/private directory

total 4
drwxr-xr-x 22 root root 4096 Mar 9 11:16 100

***** THIS IS Proxmox-Test2 *****


/etc/pve/openvz directory

total 1
-rw-r----- 1 root www-data 898 Mar 9 11:19 100.conf


/private directory

total 0
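
For reference, the container's private area being visible on one node but not the other is what you would expect if each node has mounted its own copy of the filesystem rather than a true shared filesystem. A quick check on both nodes, assuming the DRBD-backed /data mount from the log above, would be:

cat /proc/drbd          # shows the connection state and whether both nodes are Primary
mount | grep /data      # shows which device and filesystem type each node has mounted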
 
Sorry, but you can't use ext3 as a shared filesystem. What you are doing is extremely dangerous and will lead to data loss!
 
I set up an LVM volume group and added it to Proxmox storage, but the only available content type is images. I want to create containers.

Ah, OK. For containers you need a file system. Most people run containers on local storage or NFS. I suggest you use KVM if you want to use DRBD.
 
We don't want to use VMs; we want to use containers.

I installed and configured GFS2, defined it in Proxmox storage, created a container, and was able to successfully migrate it online.
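
For reference, one way to register such a mount point as shared container storage is sketched below; the storage ID "gfs2data" is illustrative, the option names follow current pvesm usage and may differ slightly on 2.0, and the same thing can be configured from the web GUI storage dialog:

pvesm add dir gfs2data --path /data --content rootdir,vztmpl --shared 1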
 
Hi Valerie,
My configuration is slightly different. I'm using DRBD, but these are the basic steps:

CLVM Setup

vi /etc/default/clvm
uncomment START_CLVM=yes

/etc/init.d/clvm start

vgs
pvcreate /dev/drbd0
vgcreate drbdvg /dev/drbd0
lvcreate --size 116G --name drbddata drbdvg
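
For reference, after these steps the clustered volume group should be visible from both nodes (clvmd has to be running on each of them); a quick check with the names used above would be:

pvs      # /dev/drbd0 should be listed as a physical volume
vgs      # drbdvg should be listed (clustered VGs show a "c" in the Attr column)
lvs      # drbddata should be visible from both nodes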


GFS2 Install & Setup

apt-get install gfs2-utils

/etc/init.d/gfs2-cluster start
mkfs.gfs2 -t proxprod:data -j 2 /dev/drbdvg/drbddata
mkdir /data
mount -t gfs2 -o noatime /dev/drbdvg/drbddata /data
mkdir /data/dump

vi /etc/default/redhat-cluster-pve
uncomment FENCE_JOIN="yes"

/etc/init.d/cman start
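
For reference, a quick way to verify that the cluster pieces are in place before creating containers on the mount, using the names above, would be:

cman_tool nodes        # both nodes should be listed with status "M" (member)
fence_tool ls          # this node's ID should appear in the fence domain member list
mount | grep gfs2      # /data should show up as type gfs2 on each node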

Any problems or questions, give me a yell.
Rich
 
Thanks! I will try it at the weekend and write back with the results.

Right now I am trying to set up GFS2 on a Proxmox VM in VirtualBox. I have a question: what cluster name can I use? In your example it is "proxprod"; I used "lso".
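
For reference, the part before the colon has to match the name of the cluster the node already belongs to; one way to look it up, assuming a standard PVE cluster setup, is:

cman_tool status | grep -i "cluster name"    # or check the <cluster name="..."> line in /etc/cluster/cluster.conf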

This is what I did:

I created a virtual HDD (/dev/sdb):
root@PVE-N3:/mnt# mkfs.gfs2 -t lso:data -j 2 /dev/sdb
This will destroy any data on /dev/sdb.
It appears to contain: GFS2 Filesystem (blocksize 4096, lockproto lock_dlm)
Are you sure you want to proceed? [y/n]y
Device: /dev/sdb
Blocksize: 4096
Device Size 4.00 GB (1048576 blocks)
Filesystem Size: 4.00 GB (1048575 blocks)
Journals: 2
Resource Groups: 16
Locking Protocol: "lock_dlm"
Lock Table: "lso:data"
UUID: b25f6b0e-84c9-e846-91f2-c10c7caab8f7

I tried to mount it and got an error:

mkdir /mnt/data
root@PVE-N3:/mnt# mount -t gfs2 -o noatime /dev/sdb /mnt/data
fs is for a different cluster
error mounting lockproto lock_dlm

What am I doing wrong?
 
I figured this issue out. The cluster name comes from /etc/cluster/cluster.conf, right?
But now there is a new problem:

root@PVE-N3:/mnt# mount -t gfs2 -o noatime /dev/sdb /mnt/data
node not a member of the default fence domain
error mounting lockproto lock_dlm

How do I add the node to the fence domain?
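
For reference, the fence domain join is normally handled by cman when FENCE_JOIN is enabled (the redhat-cluster-pve step earlier in this thread); a minimal check on the affected node might look like:

vi /etc/default/redhat-cluster-pve       # make sure FENCE_JOIN="yes" is uncommented
/etc/init.d/cman restart                 # rejoins the cluster and the fence domain
fence_tool ls                            # the node should now appear in the member list
mount -t gfs2 -o noatime /dev/sdb /mnt/data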
 
