Problem with migration of OpenVZ CTs

lojasyst
Hello

I have a two-node cluster. I can't migrate some OpenVZ containers, either online or offline. I have no problem with VMs.

To solve the problem I disabled quotas in /etc/vz/vz.conf with the DISK_QUOTA=no directive and restarted all CTs, but the problem persists.

Should I restart the server for the disk quota option to take effect?
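For reference, vzctl is supposed to re-read /etc/vz/vz.conf each time a container starts, so a full server reboot should not be needed for DISK_QUOTA; a lighter alternative (a sketch, assuming the stock /etc/init.d/vz initscript, and note that it restarts every CT on the node) would be:

# Restart the OpenVZ service so all CTs come back up under the
# current /etc/vz/vz.conf settings (including DISK_QUOTA=no).
# WARNING: this stops and restarts all containers on this node.
/etc/init.d/vz restart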


So, what can I do?

Thanks


I get the following message:

Dec 12 14:56:33 starting migration of CT 120 to node 'proxmox' (192.168.2.4)
Dec 12 14:56:33 starting rsync phase 1
Dec 12 14:56:33 # /usr/bin/rsync -aHAX --delete --numeric-ids --sparse /var/log/disco4/private/120 root@192.168.2.4:/var/log/disco4/private
Dec 12 14:56:34 dump 2nd level quota
Dec 12 14:56:34 # vzdqdump 120 -U -G -T > /var/log/disco4/dump/quotadump.120
Dec 12 14:56:34 ERROR: Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 120, maybe you need to reinitialize quota: No such file or directory
Dec 12 14:56:34 aborting phase 1 - cleanup resources
Dec 12 14:56:34 removing copied files on target node
Dec 12 14:56:34 start final cleanup
Dec 12 14:56:34 ERROR: migration aborted (duration 00:00:02): Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 120, maybe you need to reinitialize quota: No such file or directory
TASK ERROR: migration aborted
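For what it's worth, the error text itself suggests reinitializing the quota. A minimal sketch of doing that for CT 120 with the standard vzctl/vzquota tools (my assumption as a possible fix, not something confirmed in this thread):

# Stop the CT, drop any stale per-CT quota state, then start it again;
# on start, vzctl recreates the quota file when quotas are enabled.
vzctl stop 120
vzquota drop 120    # removes the on-disk quota file for CT 120, if present
vzctl start 120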

The output of pveversion:

root@proxmox:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1



root@proxmox:/etc/init.d# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="clusterklon" config_version="2">


<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>


<clusternodes>
<clusternode name="proxklon" votes="1" nodeid="1"/>
<clusternode name="proxmox" votes="1" nodeid="2"/></clusternodes>


</cluster>
 
...
root@proxmox:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-11-pve
...

You upgraded your system but you are still running the old kernel "2.6.32-11-pve" - it seems you did not reboot your host to activate the new kernel.
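A quick way to verify this on each node (a generic sketch, nothing specific to this setup):

# Kernel currently running:
uname -r
# PVE kernels installed:
dpkg -l 'pve-kernel*' | grep '^ii'
# If the newest installed kernel (here 2.6.32-16-pve) is not the one
# running, reboot the node to activate it.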
 
Hi, thanks for your response. I rebooted both nodes.

I have an additional storage on each node. I tested both with the default local storage and with the other storage; I get the same message either way.

This is the /etc/fstab on one node:

root@proxklon:/var/lib/vz/private# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=ec7e882f-47a7-4293-9ff4-f556f570a6c3 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
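As a sanity check against the paths in the migration log above (just a guess on my part), one can confirm the second storage is mounted and laid out identically on both nodes:

# Run on both nodes; these directories must exist for the rsync phase
# and the quota dump (paths taken from the migration log).
ls -ld /var/log/disco4/private /var/log/disco4/dump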


root@proxklon:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1
 
Can somebody give me a hint?


I wonder why there are errors about 2nd level quota if I don't use any quotas on either server:

root@xxxxxx:/etc/vz# cat vz.conf
## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000


## Logging parameters
LOGGING=no
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0


## Disk quota parameters
DISK_QUOTA=no
VZFASTBOOT=no


# Disable module loading. If set, vz initscript does not load any modules.
#MODULES_DISABLED=yes


# The name of the device whose IP address will be used as source IP for CT.
# By default automatically assigned.
#VE_ROUTE_SRC_DEV="eth0"


# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=detect


## Fail if there is another machine in the network with the same IP
ERROR_ON_ARPFAIL="no"


## Template parameters
TEMPLATE=/var/lib/vz/template


## Defaults for containers
VE_ROOT=/var/lib/vz/root/$VEID
VE_PRIVATE=/var/lib/vz/private/$VEID


## Filesystem layout for new CTs: either simfs (default) or ploop
#VE_LAYOUT=ploop


## Load vzwdog module
VZWDOG="no"


## IPv4 iptables kernel modules to be enabled in CTs by default
IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"
## IPv4 iptables kernel modules to be loaded by init.d/vz script
IPTABLES_MODULES="$IPTABLES"


## Enable IPv6
IPV6="yes"


## IPv6 ip6tables kernel modules
IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"
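One thing worth checking (my assumption, since the global file above clearly has DISK_QUOTA=no): the per-container config can override the global setting, and a stale quota file can survive under /var/lib/vzquota. Something along these lines:

# Does CT 120's own config re-enable quotas, overriding vz.conf?
# (On Proxmox VE the CT config lives in /etc/pve/openvz/120.conf;
# plain OpenVZ keeps it in /etc/vz/conf/120.conf.)
grep -i QUOTA /etc/pve/openvz/120.conf
# Is there a leftover quota file for CT 120?
ls -l /var/lib/vzquota/quota.120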
 
