Help! Migration problems on cluster

VulcanRidr

Renowned Member
May 21, 2010
I'm having problems migrating containers between nodes. I currently have three nodes, gryphon and fearless, which are running 3.0, and akagi, which is running 2.3. At first, I thought it was a problem migrating from a 3.0 node to 2.3, but I added the 3rd node today, and I tried migrating from fearless (3.0) to gryphon (3.0), which failed:

Code:
Aug 11 13:43:13 starting migration of CT 107 to node 'gryphon' (192.168.224.14)
Aug 11 13:43:13 container data is on shared storage 'local'
Aug 11 13:43:13 dump 2nd level quota
Aug 11 13:43:13 initialize container on remote node 'gryphon'
Aug 11 13:43:13 initializing remote quota
Aug 11 13:43:13 # /usr/bin/ssh -o 'BatchMode=yes' root@192.168.2.14 vzctl quotainit 107
Aug 11 13:43:13 vzquota : (error) quota check : stat /var/lib/vz/private/107: No such file or directory
Aug 11 13:43:13 ERROR: Failed to initialize quota: vzquota init failed [1]
Aug 11 13:43:13 start final cleanup
Aug 11 13:43:13 ERROR: migration finished with problems (duration 00:00:01)
TASK ERROR: migration problems

I have also tried fearless -> akagi (2.3), which failed in the same manner. In all cases, the config file gets moved from /etc/pve/<oldnode>/openvz to /etc/pve/<newnode>/openvz, but none of the filesystem data gets moved.

I've tried both live migrations and migrations of shut-down containers, and both fail.

What am I missing? I was able to freely migrate containers between nodes in 2.3.

Thanks,
--vr
 
Hi,
what kind of filesystem do you use on the target node?
Can you provide the following output from the target node?
Code:
mount
df -h
Udo
 

It is apparently ext3:

Code:
[root@gryphon ~]# df -Th
Filesystem               Type      Size  Used Avail Use% Mounted on
udev                     devtmpfs   10M     0   10M   0% /dev
tmpfs                    tmpfs     1.5G  372K  1.5G   1% /run
/dev/mapper/pve-root     ext3       34G  1.2G   31G   4% /
tmpfs                    tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                    tmpfs     3.0G   47M  2.9G   2% /run/shm
/dev/mapper/pve-data     ext3       70G  180M   70G   1% /var/lib/vz
/dev/sda1                ext3      495M   34M  436M   8% /boot
/dev/fuse                fuse       30M   44K   30M   1% /etc/pve

This is a fresh install of proxmox-ve. I did notice that each of the containers on the other 3.0 node has its own simfs mount, so in addition to the partitions listed above, I also have (for each container):

Code:
[root@fearless 107]# df -Th
/var/lib/vz/private/106  simfs      12G   11G  1.4G  89% /var/lib/vz/root/106
tmpfs                    tmpfs     1.0G     0  1.0G   0% /var/lib/vz/root/106/lib/init/rw
tmpfs                    tmpfs     1.0G     0  1.0G   0% /var/lib/vz/root/106/dev/shm

Thanks,
--vr
 
Strange,
I guess that happens due to the mixed cluster (migration works fine on my 3.0 cluster).
Is the node where the CT runs on 3.0?
Is the node up to date?
Do you connect to the GUI on a 3.0 node or on the 2.3 node?

What does the following show
Code:
grep VE_PRIVATE /etc/pve/openvz/107.conf
on the node with CT 107?

Udo
 
Strange,
I guess that happens due to the mixed cluster (migration works fine on my 3.0 cluster).

My goal is to migrate everything off of the remaining 2.3 node. The 3.0 nodes are new (to me) hardware. The CTs on fearless came off of hornet, the other 2.3 node, and there was no problem migrating them. Right now, I am trying to migrate everything off of the 2.3 node.

Is the node where the CT runs on 3.0?

It is. Though I have also tried to migrate a CT on the 2.3 node, with the same results. The conf file gets moved to the new node, but the only file under /var/lib/vz that gets moved is /var/lib/vz/107/fastboot. At that point, the migration fails.

Is the node up to date?

All are up to date.
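(For anyone hitting the same issue: one quick way to verify that is to run pveversion on each node and compare the package lists.)

Code:
pveversion -v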

Do you connect to the GUI on a 3.0 node or on the 2.3 node?

I've connected and tried from both; the GUI is reachable on all three nodes.

What does the following show
Code:
grep VE_PRIVATE /etc/pve/openvz/107.conf
on the node with CT 107?

Udo

Code:
[root@fearless 107]# grep VE_PRIVATE /etc/pve/openvz/107.conf
VE_PRIVATE="/var/lib/vz/private/107"
 
Aug 11 13:43:13 starting migration of CT 107 to node 'gryphon' (192.168.224.14)
Aug 11 13:43:13 container data is on shared storage 'local'

Someone (probably you) marked the local storage as 'shared'!

Local storage is usually not shared, so please remove the shared flag from the local storage and try again.
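
For reference, the flag lives in the storage configuration. As a rough sketch (the content list below is only illustrative, not necessarily your exact entry), a 'local' directory storage marked as shared would look something like this in /etc/pve/storage.cfg, and one way to clear it from the CLI is with pvesm; unchecking 'Shared' for the local storage in the GUI under Datacenter -> Storage does the same thing:

Code:
# /etc/pve/storage.cfg -- illustrative entry; your content line may differ
dir: local
        path /var/lib/vz
        content images,rootdir,vztmpl,iso
        shared 1

# clear the flag from the command line
pvesm set local --shared 0

With the flag set, the migration assumes the target node already sees the container data, which would explain why only the config file moved and the quota init then failed on the missing /var/lib/vz/private/107.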
 
Thanks to both of you for your help. Dietmar, that seems to have done it. I am currently migrating CT107, and it appears to be moving the files perfectly. Udo, thank you for your suggestions and for sticking with me. Don't be too hard on yourself; I was staring at the same message for much longer and didn't see it. :)

I am relatively sure I did not change that setting, considering I had to hunt around to figure out how to change it back.

Thanks again,
--vr
 
