snapshot mode

hestin (Guest)
Hi,
does the snapshot mode of backup normally suspend and then resume the VM after taking the snapshot?
If so, why is it said to be zero downtime for snapshots?

But my VMs freeze and stop responding during the backup,

and in my log it looks like this:

Nov 5 19:42:01 debian /usr/sbin/cron[3967]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Nov 5 19:42:01 debian /USR/SBIN/CRON[14163]: (root) CMD (vzdump --quiet --snapshot --compress --storage backup 102)
Nov 5 19:42:02 debian vzdump[14163]: INFO: starting new backup job: vzdump --quiet --snapshot --compress --storage backup 102
Nov 5 19:42:02 debian vzdump[14163]: INFO: Starting Backup of VM 102 (qemu)
Nov 5 19:42:03 debian qm[14181]: VM 102 suspend
Nov 5 19:42:15 debian proxwww[14196]: Starting new child 14196
Nov 5 19:42:57 debian proxwww[14231]: Starting new child 14231
Nov 5 19:43:25 debian proxwww[14255]: Starting new child 14255
Nov 5 19:43:38 debian postfix/qmgr[4113]: 8F2971E01B5: from=<root@debian.pro.com>, size=3709, nrcpt=1 (queue active)
Nov 5 19:43:59 debian postfix/smtp[14267]: 8F2971E01B5: to=<josehestinjose@gmail.com>, relay=none, delay=8537, delays=8516/0.37/20/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)
Nov 5 19:44:15 debian proxwww[14295]: Starting new child 14295
Nov 5 19:44:31 debian proxwww[14308]: Starting new child 14308
Nov 5 19:45:01 debian /USR/SBIN/CRON[14334]: (root) CMD (/usr/share/vzctl/scripts/vpsnetclean)
Nov 5 19:45:01 debian /USR/SBIN/CRON[14333]: (root) CMD (/usr/share/vzctl/scripts/vpsreboot)
Nov 5 19:45:39 debian proxwww[14382]: Starting new child 14382
Nov 5 19:45:55 debian proxwww[14396]: Starting new child 14396
Nov 5 19:46:52 debian proxwww[14442]: Starting new child 14442
Nov 5 19:47:11 debian proxwww[14458]: Starting new child 14458
Nov 5 19:47:26 debian qm[14473]: VM 102 resume
Nov 5 19:47:26 debian vzdump[14163]: INFO: Finished Backup of VM 102 (00:05:24)
Nov 5 19:47:26 debian vzdump[14163]: INFO: Backup job finished successfuly
Nov 5 19:47:27 debian proxwww[14442]: update ticket

What might be the problem?
 
Here is the content of the log file; it shows a mode failure. Thank you for making me notice that.


Nov 05 19:42:02 INFO: Starting Backup of VM 102 (qemu)
Nov 05 19:42:02 INFO: running
Nov 05 19:42:02 INFO: status = running
Nov 05 19:42:03 INFO: mode failure - unable to dump into snapshot (use option --dumpdir)
Nov 05 19:42:03 INFO: trying 'suspend' mode instead
Nov 05 19:42:03 INFO: backup mode: suspend
Nov 05 19:42:03 INFO: bandwidth limit: 10240 KB/s
Nov 05 19:42:03 INFO: suspend vm
Nov 05 19:42:03 INFO: creating archive '/backup/vzdump-qemu-102-2009_11_05-19_42_02.tgz'
Nov 05 19:42:03 INFO: adding '/backup/vzdump-qemu-102-2009_11_05-19_42_02.tmp/qemu-server.conf' to archive ('qemu-server.conf')
Nov 05 19:42:03 INFO: adding '/var/lib/vz/images/102/vm-102-disk-1.qcow2' to archive ('vm-disk-scsi0.qcow2')
Nov 05 19:47:24 INFO: Total bytes written: 2107411968 (6.26 MiB/s)
Nov 05 19:47:24 INFO: archive file size: 733MB
Nov 05 19:47:26 INFO: resume vm
Nov 05 19:47:26 INFO: vm is online again after 323 seconds
Nov 05 19:47:26 INFO: Finished Backup of VM 102 (00:05:24)
 
Nov 05 19:42:03 INFO: mode failure - unable to dump into snapshot (use option --dumpdir)
Nov 05 19:42:03 INFO: trying 'suspend' mode instead

Please specify another backup location (outside the LVM volume groups).

It is a bad idea to store the backup on the volume you snapshotted, because that would blow up the space needed by the snapshot.
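For example, assuming the external disk or NFS share is mounted at /mnt/backup (the path is only a placeholder for your own mount point), the backup could be pointed there with the --dumpdir option the log already suggests:

vzdump --snapshot --compress --dumpdir /mnt/backup 102

That way the LVM snapshot only has to absorb the blocks the VM changes while the backup runs, and the archive itself ends up outside the volume group.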
 
Thanks for your directions, dietmar.
I tried the snapshot backup with an NFS mount, but it said the snapshot cannot be taken since there is no free space available in the LVM volume group.
I think that's because I assigned all of the space in the volume group to a single logical volume.

Here's the log:
Nov 06 01:16:01 INFO: Starting Backup of VM 101 (qemu)
Nov 06 01:16:01 INFO: running
Nov 06 01:16:01 INFO: status = running
Nov 06 01:16:02 INFO: backup mode: snapshot
Nov 06 01:16:02 INFO: bandwidth limit: 10240 KB/s
Nov 06 01:16:02 INFO: /dev/hdb: read failed after 0 of 4096 at 0: Input/output error
Nov 06 01:16:02 INFO: Insufficient free extents (0) in volume group Vgroup1: 256 required
Nov 06 01:16:02 ERROR: Backup of VM 101 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-debian-0' '/dev/Vgroup1/lv1'' failed with exit code 5


How much free space in the volume group is usually required to take an LVM snapshot when backing up to an NFS mount or another mounted hard disk?
 
How much free space in the volume group is usually required to take an LVM snapshot when backing up to an NFS mount or another mounted hard disk?

We use 1 GB for the snapshot by default (the --size option). But you can't make a snapshot of an NFS mount itself; NFS does not support snapshots.
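To check whether the volume group actually has that much room, something along these lines helps (Vgroup1 is the VG name from the log above):

vgdisplay Vgroup1    # look at the "Free  PE / Size" line
vgs -o vg_name,vg_size,vg_free

The failed lvcreate above asked for 256 free extents, which at the usual 4 MiB extent size is exactly the 1 GB default, and with 0 free extents the snapshot cannot be created. If you cannot free space in the VG, the snapshot size can in principle be reduced via --size (in MB), for example 'vzdump --snapshot --storage backup --size 512 102', but a snapshot that is too small can fill up and become invalid while the backup is still running, so leaving at least 1 GB unallocated in the VG is the safer route.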
 
No, no. I was writing the snapshot backup to an NFS mount (the backup storage is an NFS mount). Isn't that possible?
 
Hi,

I need some clarification about this, because I suddenly got the same problem as the original poster and might now need to reconsider my backup setup:
Please specify another backup location (outside the LVM volume groups).

It is a bad idea to store the backup on the volume you snapshotted, because that would blow up the space needed by the snapshot.
1. It is definitely a bad idea to store a backup of an LV on itself.
2. Is it a bad idea to store the backup in a dedicated backup LV that is part of the same VG as the LV for which the snapshot backup is created?
3. Is a setup like in 2 possible, but with each LV in a separate VG?
4. Or should LVs as a backup destination be avoided entirely?

Thanks
 
Please open a new thread if you have a new question. But before you do, search the forum; there are a lot of threads dealing with backup configurations.
 
Hi Tom,

As my questions were related to the OP's problem, I decided to extend this thread in the hope that others searching for the OP's problem description might find it (and hopefully the solution) here.
I really searched the forums and got some ideas, but no definite answers. There is still a lot of confusion regarding this topic, which is why I need clarification... :)
My setup is one large VG containing several LVs; some of them contain VMs (raw), others hold filesystems or the swap partition for the Proxmox system. One LV (also in the same VG) is used as a backup volume, formatted with a conventional filesystem (ext3) and mounted at /backup. This configuration initially worked with snapshot backups, but now I anticipate the same problems as the OP. Thus, I'd be glad to get some answers to the questions in my previous post.

Thanks
fatzopilot
 
Hi,

I found my mistake: I had simply forgotten to update fstab, so the LV containing the backups was not mounted after the next reboot. Because of that, Proxmox used the root partition for the backup, which definitely does not work for snapshots. Now that the backup LV is mounted again, snapshots cause no problems. So to answer my own question: it is possible to have a backup LV that is part of the same VG as the LV for which the snapshot backup is created. (Answer 2) :)
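In case someone else trips over the same thing, the missing piece was just an fstab entry along these lines (the device path is a placeholder, use your own VG/LV names):

/dev/<yourVG>/<backupLV>   /backup   ext3   defaults   0   2

After a reboot, a quick 'df -h /backup' or 'mount | grep backup' shows whether the backup LV is really mounted there, or whether vzdump would silently write into the root filesystem again.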

Cheers
fatzopilot
 
