Backup vzdump failed

diegox80

Hi,
This is the log:

Jun 11 19:38:01 INFO: Starting Backup of VM 101 (qemu)
Jun 11 19:38:01 INFO: running
Jun 11 19:38:01 INFO: status = running
Jun 11 19:38:02 INFO: backup mode: snapshot
Jun 11 19:38:02 INFO: ionice priority: 7
Jun 11 19:38:02 INFO: Insufficient free extents (0) in volume group pve1: 256 required
Jun 11 19:38:02 ERROR: Backup of VM 101 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-nodo1-0' '/dev/pve1/data'' failed with exit code 5

Do I have to resize the Logical Volume?
Thanks
 
Looks like you configured a custom VG (pve1), but you do not have enough free space for snapshots in this volume group.
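For anyone hitting the same "Insufficient free extents" message, the free space in the volume group can be checked like this (a minimal sketch; the VG name pve1 is taken from the log above, and the lvreduce line is commented out on purpose, since shrinking a logical volume can destroy data if done carelessly):

# show free space / extents in the volume group used for the snapshot
vgs pve1
vgdisplay pve1 | grep -i free

# vzdump needs unallocated space in the VG for its snapshot LV (1024M in the log).
# If the data LV fills the whole VG, you would have to shrink it or extend the VG, e.g.:
#   lvreduce --resizefs -L -4G /dev/pve1/data    # illustration only - back up first!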
 
Hi all,
I think I have a similar problem.
Lately, when I want to back up my KVM machines, I get this:

Jun 10 07:00:02 INFO: Starting Backup of VM 305 (openvz)
Jun 10 07:00:02 INFO: CTID 305 exist mounted running
Jun 10 07:00:02 INFO: status = CTID 305 exist mounted running
Jun 10 07:00:02 INFO: mode failure - unable to dump into snapshot (use option --dumpdir)
Jun 10 07:00:02 INFO: trying 'suspend' mode instead
Jun 10 07:00:02 INFO: backup mode: suspend
Jun 10 07:00:02 INFO: ionice priority: 7
Jun 10 07:00:02 INFO: starting first sync /var/lib/vz/private/305/ to /var/lib/vz/backups/taeglich/vzdump-openvz-305-2011_06_10-07_00_02.tmp
Jun 10 07:05:09 INFO: Number of files: 88146
Jun 10 07:05:09 INFO: Number of files transferred: 76636
Jun 10 07:05:09 INFO: Total file size: 1710323608 bytes
Jun 10 07:05:09 INFO: Total transferred file size: 1598475239 bytes
Jun 10 07:05:09 INFO: Literal data: 1598475239 bytes
Jun 10 07:05:09 INFO: Matched data: 0 bytes
Jun 10 07:05:09 INFO: File list size: 2027390
Jun 10 07:05:09 INFO: File list generation time: 0.001 seconds
Jun 10 07:05:09 INFO: File list transfer time: 0.000 seconds
Jun 10 07:05:09 INFO: Total bytes sent: 1604011894
Jun 10 07:05:09 INFO: Total bytes received: 1517524
Jun 10 07:05:09 INFO: sent 1604011894 bytes received 1517524 bytes 5221233.88 bytes/sec
Jun 10 07:05:09 INFO: total size is 1710323608 speedup is 1.07
Jun 10 07:05:09 INFO: first sync finished (307 seconds)
Jun 10 07:05:09 INFO: suspend vm
Jun 10 07:05:09 INFO: Setting up checkpoint...
Jun 10 07:05:09 INFO: suspend...
Jun 10 07:05:19 INFO: Can not suspend container: Interrupted system call
Jun 10 07:05:19 INFO: Error: interrupted or timed out.
Jun 10 07:05:19 INFO: Checkpointing failed
Jun 10 07:06:06 ERROR: Backup of VM 305 failed - command 'vzctl --skiplock chkpnt 305 --suspend' failed with exit code 16


I don't think it's a space problem:
/dev/mapper/pve-data 1,7T 503G 1,2T 30% /var/lib/vz

That should be enough.

The problem is that the machine remains locked because the backup failed, and therefore the whole system is pretty unstable.
Maybe you know what I could do.
I didn't have this kind of problem before updating to 1.8.

Thanks
Sascha
 
You cannot create an LVM snapshot inside the same volume; this creates a loop. You need to specify a backup target outside /var/lib/vz.
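For example, something along these lines (the target path is just an example; check man vzdump for the exact options on your version):

# dump to a directory on a different filesystem/disk than /var/lib/vz
vzdump 305 --suspend --compress --dumpdir /mnt/backup
# or set dumpdir permanently in /etc/vzdump.conf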
 
Oops... that would definitely explain it...

Hm... what would be the best practice then to shrink /dev/mapper/pve-data and create a new backup folder on the freed space?
I would really appreciate hearing how you would approach such a thing.

Thanks again
Sascha
 
Store your backups on an external NFS server. Basically, it makes little sense to store your backups on a local drive.
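A minimal sketch of such a setup (server address and export path are examples):

# mount an NFS export for backups
mkdir -p /mnt/pve-backup
mount -t nfs 192.168.1.50:/export/backup /mnt/pve-backup

# make it permanent via /etc/fstab:
# 192.168.1.50:/export/backup  /mnt/pve-backup  nfs  defaults  0  0

# then point vzdump at it, e.g. --dumpdir /mnt/pve-backup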
 
Yes...of course, you're right.
I have done so now, and it basically works.
One thing though:
The resulting tgz of my KVM VM keeps getting bigger and bigger.
Yesterday it was 173 GB; today it's 228 GB.
Nobody worked on it between yesterday and today, though...
This results in very long backup times, of course.

Any idea what i should do?

Thanks again
Sascha
 
OK, the backup works now.
Thanks for the hint!

But I still don't get the size issue.
The KVM machine to be backed up contains 2 virtual disks (200 & 500 GB).
With df -h on this VM I get:
/dev/mapper/vg_ucs-rootfs 191G 6,3G 175G 4% /
tmpfs 5,8G 0 5,8G 0% /lib/init/rw
udev 10M 80K 10M 1% /dev
tmpfs 5,8G 0 5,8G 0% /dev/shm
/dev/sda1 99M 59M 36M 63% /boot
/dev/sdb1 138G 13G 119G 10% /home
/dev/sdb2 454G 127G 304G 30% /var/samba

So the actual used space is about 150 GB.

How come the resulting backup is about 230 GB then?
It should be much smaller, shouldn't it?
Especially as it is also being compressed...

The only thing I can think of is that some of the unused space is still seen as data and therefore gets backed up...
Can anyone confirm this behaviour, and does anyone have an idea how it could be fixed?


Thanks for your help
Sascha
 
Yes, the host cannot really tell used from unused data - it can only try to detect blocks containing zeros.

- Dietmar
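A quick illustration of why this matters for the backup size - zeros compress to almost nothing, while leftover deleted data still looks like real data to the backup:

# 100 MB of zeros compresses to a tiny fraction of its size...
dd if=/dev/zero bs=1M count=100 2>/dev/null | gzip -c | wc -c
# ...while 100 MB of random ("stale") data barely shrinks at all
dd if=/dev/urandom bs=1M count=100 2>/dev/null | gzip -c | wc -c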

Oh, I see...
So if I got that right from another thread here, sfill would be the tool to use?
Can I securely wipe the unused blocks of my KVM Debian machine with it?
And if yes, any idea how?
Or would an approach with dd maybe be more suitable?

I don't want to do anything stupid, so I'd better ask here. I really don't want my "good data" to disappear...

Thank you for your answer
Sascha

---------------------

Edit:
tried with
sfill -vfz /home

but it doesn't seem to do the job...
Today's backup is even bigger than before...

Please... your help is highly appreciated... I am running out of backup time!

Thank you very much
Sascha
 
Hi Dietmar,
Of course the dir exists!

With this command the ooooo.ooo file (or whatever it is called) is created and then, after a long while, deleted again.
But the result is an even bigger backup.
Maybe this command doesn't fill the empty space with zeros but with data that the backup still treats as used?

What would be the fastest and best way to release the "empty space", in your opinion?
Thank you
Sascha
 
But you would say the command is ok?

Looks OK to me (but I have never used it myself).
Or is there any faster, maybe better way to achieve what we're after?

Some people simply create a large file with zeros using dd, then delete that file.
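A minimal sketch of that dd approach, run inside the guest (file names and paths are examples; the filesystem briefly runs completely full, so pick a quiet moment and repeat per mountpoint):

# fill the free space with zeros, then delete the file again
dd if=/dev/zero of=/home/zerofill bs=1M    # stops with "no space left on device" when done
sync
rm -f /home/zerofill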
 
Yup...
sfill -llz /home
did the trick!
The backup shrank from 234 GB to 208 GB...
Next weekend I'll do a
sfill -llz /var/
and I guess that will release another 60 GB or so...

Thank you very much for your assistance
Sascha
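For reference - the flags used above, as I read them in the secure-delete documentation (worth verifying with man sfill on your system): -l lowers the number of overwrite passes, -ll reduces it to a single pass, and -z writes zeros on the last pass instead of random data. A single pass of zeros is exactly what lets the backup compress the free space, and it is much faster than the default multi-pass run:

sfill -llz /home    # single pass of zeros over the free space in /home
sfill -llz /var     # same for /var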
 
