vzmigrate +" DISK_QUOTA=off" FAILS

Polydeukes

New Member
Apr 1, 2013
Hi everyone!

I have two identical servers running Proxmox 2.2 with GFS2 shared storage. If I migrate an OpenVZ container with "DISK_QUOTA=on" (in the vz.conf config file), it takes almost 1 hour in the "initializing remote quota" step (150 GB of small files).

Conversely, if I disable quotas globally (DISK_QUOTA=off in vz.conf) and try to migrate, vzmigrate tries to dump the container's quota file, which doesn't exist (quotas are disabled). This looks like a bug that was fixed in vzctl >= 3.0.24 (https://bugzilla.openvz.org/show_bug.cgi?id=1094), but for some reason it still fails here.
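
For reference, this is roughly the check I would expect around the quota dump step (a minimal sketch only, assuming the standard OpenVZ quota file location /var/lib/vzquota/quota.<CTID>, and reusing the vzdqdump call and dump path from my failing log below):

CTID=101
DUMPFILE=/almacen/dump/quotadump.$CTID

# The per-container quota file only exists while disk quota is enabled for the CT.
if [ -f "/var/lib/vzquota/quota.$CTID" ]; then
    # Dump the 2nd level (per-UID/GID) quota so it can be restored on the target node.
    vzdqdump $CTID -U -G -T > "$DUMPFILE"
else
    # With DISK_QUOTA=off there is nothing to dump, so the step should simply be skipped.
    echo "CT $CTID has no quota file, skipping 2nd level quota dump"
fi

In my case the dump is attempted anyway and the migration aborts, as the second log below shows.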

Here I post some logs and info:

Quotas_on said:
May 01 16:52:10 starting migration of CT 247 to node 'XXXX'
May 01 16:52:10 container is running - using online migration
May 01 16:52:10 container data is on shared storage 'containers'
May 01 16:52:10 start live migration - suspending container
May 01 16:52:11 dump container state
May 01 16:53:49 dump 2nd level quota
May 01 16:53:51 initialize container on remote node 'XXXX'
May 01 16:53:51 initializing remote quota
May 01 17:41:28 turn on remote quota
May 01 17:41:29 load 2nd level quota
May 01 17:41:29 starting container on remote node 'XXXX'
May 01 17:41:29 restore container state
May 01 17:42:31 start final cleanup
May 01 17:42:32 migration finished successfuly (duration 00:50:23)
TASK OK

Quotas_off said:
May 01 14:09:28 starting migration of CT 101 to node 'XXXX'
May 01 14:09:29 container data is on shared storage 'almacen'
May 01 14:09:29 dump 2nd level quota
May 01 14:09:29 # vzdqdump 101 -U -G -T > /almacen/dump/quotadump.101
May 01 14:09:29 ERROR: Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 101, maybe you need to reinitialize quota: No such file or directory
May 01 14:09:29 aborting phase 1 - cleanup resources
May 01 14:09:29 start final cleanup
May 01 14:09:29 ERROR: migration aborted (duration 00:00:01): Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 101, maybe you need to reinitialize quota: No such file or directory
TASK ERROR: migration aborted

pveversion said:
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-6-pve: 2.6.32-55
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

/etc/vz/vz.conf said:
## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000

## Logging parameters
LOGGING=yes
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0

## Disk quota parameters
DISK_QUOTA=yes
VZFASTBOOT=no

# Disable module loading. If set, vz initscript does not load any modules.
#MODULES_DISABLED=yes

# The name of the device whose IP address will be used as source IP for CT.
# By default automatically assigned.
#VE_ROUTE_SRC_DEV="eth0"

# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=detect

## Fail if there is another machine in the network with the same IP
ERROR_ON_ARPFAIL="no"

## Template parameters
TEMPLATE=/var/lib/vz/template

## Defaults for containers
VE_ROOT=/var/lib/vz/root/$VEID
VE_PRIVATE=/var/lib/vz/private/$VEID

## Filesystem layout for new CTs: either simfs (default) or ploop
#VE_LAYOUT=ploop

## Load vzwdog module
VZWDOG="no"

## IPv4 iptables kernel modules to be enabled in CTs by default
IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"
## IPv4 iptables kernel modules to be loaded by init.d/vz script
IPTABLES_MODULES="$IPTABLES"

## Enable IPv6
IPV6="yes"

## IPv6 ip6tables kernel modules
IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"
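
As a side note, my understanding is that DISK_QUOTA can also be overridden per container in the CT's own config file, which should take precedence over the global value in /etc/vz/vz.conf (a minimal sketch, assuming the usual /etc/vz/conf/<CTID>.conf location; please correct me if Proxmox handles this differently):

# Hypothetical per-container override in /etc/vz/conf/101.conf
# Disables disk quota for this CT regardless of the global DISK_QUOTA setting
DISK_QUOTA=no
# 2nd level (per-UID/GID) quota is controlled separately; 0 disables it
#QUOTAUGIDLIMIT=0

Either way, with quotas on the migration is painfully slow, and with quotas off it aborts as shown in the logs above.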

Any hint would be appreciated. Thanks.
 
Polydeukes said:
If I migrate an OpenVZ container with "DISK_QUOTA=on" (in the vz.conf config file), it takes almost 1 hour in the "initializing remote quota" step (150 GB of small files).

What kind of storage do you use - NFS or local disk?
 
I'm having the same problem: my CTs are on an NFS share, DISK_QUOTA=off is already set in vz.conf, and I still get the same error as attached.

Is there any solution for this problem?


(Attachment: vzmigrate.jpg)
 