Snapshot process on RAW files

mcmyst

We are running Proxmox VE 2.2 on a three-node cluster. For VMs we use KVM with RAW disk files. So far we have been using the Stop backup mode, but on one specific VM the backup takes too long (20 minutes of downtime) because of the size of its disk: 50 GB.

I would like to use the Snapshot mode instead. I have enough space on my root partition to hold the backup in a different location than /var/lib/vz. But I am wondering how the snapshot works: the backup still takes around 20 minutes, but the VM keeps running during it. So if a file is created just after the snapshot begins, will it end up in the backup?

I see in the vzdump documentation that the Snapshot mode uses an LVM snapshot to back up a VM. But my volume group doesn't seem to have any space left to create LVM volumes to hold the backup. Yet when I look at the backup log I can see this line:
Code:
INFO: Logical volume "vzsnap-****-0" created


So an LVM volume really is created, but I can't understand where.
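My guess is that vzdump does something like this behind the scenes (a rough sketch; I don't know the exact options it passes), which would explain the log line above. The snapshot would be carved out of the volume group's free extents, not out of a filesystem:
Code:
# sketch only -- the real vzdump logic and snapshot size may differ
lvcreate --snapshot --size 1024M --name vzsnap-$(hostname)-0 /dev/pve/data
mkdir -p /mnt/vzsnap0
mount -o ro /dev/pve/vzsnap-$(hostname)-0 /mnt/vzsnap0
# ...archive the raw images from /mnt/vzsnap0, then clean up:
umount /mnt/vzsnap0
lvremove -f /dev/pve/vzsnap-$(hostname)-0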

Here is my storage config:
Code:
cat /etc/pve/storage.cfg 
dir: local
    path /var/lib/vz
    content images,iso,vztmpl,rootdir
    maxfiles 0


dir: backup_VM
    path /backup/
    content backup
    maxfiles 5

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root  234G   80G  142G  36% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
udev                  3.9G  264K  3.9G   1% /dev
tmpfs                 3.9G   44M  3.9G   2% /dev/shm
/dev/sda1             495M   35M  436M   8% /boot
/dev/fuse              30M   32K   30M   1% /etc/pve
/dev/mapper/pve-data  296G   38G  258G  13% /var/lib/vz

lvscan
  ACTIVE            '/dev/pve/swap' [7.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [237.25 GiB] inherit
  ACTIVE            '/dev/pve/data' [300.00 GiB] inherit

pvscan 
  PV /dev/sda2   VG pve   lvm2 [557.25 GiB / 13.00 GiB free]
  Total: 1 [557.25 GiB] / in use: 1 [557.25 GiB] / in no VG: 0 [0   ]

Do you have an idea?

Thank you
 
But the VM disk is 50 GB, and there are 25 GB of data inside. When the backup ends, the archive is 17 GB (compressed with LZO). So I don't understand how 13 GiB could be enough?
 
Hi,
the LVM snapshot "freezes" the data LV to read-only (no changes, so its content is safe to back up) and uses the snapshot LV for all changes (writes go to the snapshot).
You only run into trouble if you write more than 13 GiB during the backup run.

After the backup the snapshot is deleted, which means the written changes are committed to the original LV.

Udo
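The snapshot LV is allocated from the free extents of the volume group, which is why the 13 GiB shown by pvscan is the limit here. You can check that headroom directly with vgs (the output below is illustrative, matching your pvscan):
Code:
vgs pve
#  VG   #PV #LV #SN Attr   VSize   VFree
#  pve    1   3   0 wz--n- 557.25g 13.00g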
 
I don't think I am going to write 13 GiB in a few minutes, but I will see if I can increase it to 20 or 30 GiB for safety.
Thank you for the details !
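(Since the VG only has 13 GiB unallocated, a 20 or 30 GiB snapshot would first need more free extents, for example by adding a spare disk to the VG; /dev/sdb1 below is purely hypothetical:)
Code:
pvcreate /dev/sdb1        # hypothetical spare partition
vgextend pve /dev/sdb1    # add its extents to the pve VG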
 
OK, I guess I would only fill up 13 GiB if I had a big database server running on my PVE... which is not the case right now :).
 
I hope it is the contrary: writes go to the data LV, and a copy of the old data is written to the snapshot LV. This way, if the backup is interrupted by some error, you can just safely remove the snapshot volume (which contains the old data) and you don't lose anything you wrote in the meantime!
 
Hi,
you are right! I described redirect-on-write, but LVM uses copy-on-write (so there is no need to commit all the writes when removing the snapshot).

Udo
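With copy-on-write you can also watch the snapshot fill up while a backup runs; the snapshot-usage column of lvs (Snap% or Data%, depending on the LVM version) shows how much of the CoW space is already used:
Code:
# run while the backup is going; the vzsnap name may differ on your node
watch -n 30 'lvs | grep vzsnap'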
 
During the backup last night it seems that I ran out of space:
Code:
INFO: ERROR: incomplete read detected
ERROR: Backup of VM 106 failed - command '/usr/lib/qemu-server/vmtar '/backup//dump/vzdump-qemu-106-2013_02_12-03_10_42.tmp/qemu-server.conf' 'qemu-server.conf' '/mnt/vzsnap0/images/106/vm-106-disk-1.raw' 'vm-disk-virtio0.raw'|lzop >/backup//dump/vzdump-qemu-106-2013_02_12-03_10_42.tar.dat' failed: exit code 255

How can I control how much of the 13 GiB of free space vzdump uses for the snapshot; is there some kind of default value?
I have looked in the /etc/vzdump.conf file but everything is commented out.
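I guess the relevant line would be something like this (assuming the value is in MB, as the commented-out examples suggest):
Code:
# /etc/vzdump.conf -- LVM snapshot size, value in MB
size: 10240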
 
OK, so I have set "size: 10240" (the value is in MB), and when I take a look at the cron job:
Code:
root vzdump --quiet 1 --mode snapshot --mailto m.flye@siea.fr --all 1 --node **** --compress lzo --storage backup_VM

It does not reference the vzdump.conf file; should I change it, or does it default to /etc/vzdump.conf?
 
Thank you, it is looking much better now. I have launched a backup and we can see the temporary LVM snapshot volume:
Code:
lvscan
  ACTIVE            '/dev/pve/swap' [7.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [237.25 GiB] inherit
  ACTIVE   Original '/dev/pve/data' [300.00 GiB] inherit
  ACTIVE   Snapshot '/dev/pve/vzsnap-*****-0' [10.00 GiB] inherit
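As discussed above, the snapshot is removed automatically when the backup finishes; should a failed run ever leave a vzsnap volume behind, it can be unmounted and removed by hand, e.g.:
Code:
umount /mnt/vzsnap0                          # if it is still mounted
lvremove -f /dev/pve/vzsnap-$(hostname)-0    # name as shown by lvscan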
 
