Problem with migration and/or Openvz config with disk_quota

jleg

Member
Nov 24, 2009
Hi,

we just moved a 2-node Proxmox 1.9 setup to a 3-node Proxmox 2.1 cluster, and things look really good. Currently we only see an oddity which we do not really understand: migration of containers does not work unless the container's config is manually modified with the line
DISK_QUOTA=no

Otherwise, migration always tries to call "vzdqdump", which fails with "Can't open quota file". The OpenVZ filesystem is not ext3/ext4 but xfs; I guess vzquota still does not support xfs.
However, DISK_QUOTA is set to "no" in /etc/vz/vz.conf, which does not seem to help.
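For reference, this is roughly what the manual fix looks like per container (just a sketch; the config path assumes the plain OpenVZ layout and 107 is only an example ID, adjust to wherever your $VEID.conf actually lives):

Code:
# append DISK_QUOTA=no to the container's own config if it is not there yet
grep -q '^DISK_QUOTA=' /etc/vz/conf/107.conf || echo 'DISK_QUOTA=no' >> /etc/vz/conf/107.conf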

In OpenVZMigrate.pm I can see that vzdqdump gets called whenever "DISK_QUOTA" is undefined or != 0:

Code:
if (!defined($disk_quota) || ($disk_quota != 0)) {
...
        $cmd = "vzdqdump $vmid -U -G -T > " . PVE::Tools::shellquote($self->{quotadumpfile});


But this code seems to read the container's config, and there is no "DISK_QUOTA" parameter in there at all unless it is added manually.
Of course, the GUI says "user quota disabled", but that seems to relate to "QUOTAUGIDLIMIT"...

Are we doing something wrong, or is this a little flaw in the migration code?
 
Yes, we do not test xfs, because quota does not work on it.

I understand; however, this looks more like OpenVZMigrate.pm operating on a non-existent parameter, which IMO is "non-clean" anyway ;-)

So the question would be: is there a way to get "DISK_QUOTA=no" into $VEID.conf automatically, besides "hard editing" API2/OpenVZ.pm?
 
Simply add it to the VEID.conf file?

Well, the key word was 'automatically'. Having to do this manually would disqualify it for production use in a team of admins (for us), who would then have to follow a 'soft policy' of manually going to the CLI to add that line...

Hm, perhaps some cronjob could do/check this; let's see...
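Something along these lines, as a rough sketch (the config directory is an assumption, adjust it to wherever your $VEID.conf files actually live; not tested):

Code:
#!/bin/sh
# sketch of a cronjob that makes sure every container config carries DISK_QUOTA=no
CONFDIR=/etc/vz/conf
for cfg in "$CONFDIR"/[0-9]*.conf; do
    [ -e "$cfg" ] || continue
    if ! grep -q '^DISK_QUOTA=' "$cfg"; then
        echo 'DISK_QUOTA=no' >> "$cfg"
    fi
done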
 
I found a workaround for this: since we're using vps.mount/umount scripts anyway to activate xfs project quota, those scripts now also simply check that "DISK_QUOTA" is set to "no" in $VEID.conf. Looks good so far.
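In case anyone wants to do the same, the relevant part of our vps.mount looks roughly like this (simplified sketch; on our setup vzctl sets VEID and VE_CONFFILE in the mount script environment):

Code:
#!/bin/bash
# excerpt from vps.mount (sketch): make sure the container config disables vzquota
if ! grep -q '^DISK_QUOTA=no' "$VE_CONFFILE"; then
    echo 'DISK_QUOTA=no' >> "$VE_CONFFILE"
fi
# ... the usual xfs project quota setup follows here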

Btw, there seems to be a little problem with Proxmox creating fresh containers: after creation, the container's config has these lines:

VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/107"

looks like $VEID is not expanded?
 
looks like $VEID is not expanded?

see 'man vzctl':

Code:
      --root path
           Sets the path to root directory (VE_ROOT) for  this  container.   This  is
           essentially  a  mount  point for container's root directory.  Argument can
           contain literal string $VEID, which will be substituted with  the  numeric
           CT ID.

But I guess we should expand all, or nothing.
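So either form is valid as far as vzctl is concerned; it is just the mix of both in one freshly created config that looks odd. Consistently unexpanded, the two lines would read, for example:

Code:
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/$VEID"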
 
