Backup snapshot mode when there is a lot of write activity

arnolem

Active Member
Jan 22, 2013
Hello,

I'm installing a new server on Proxmox 2.2 and, before using it in production, I am testing snapshot-mode backups (for CTs).
(Intel Xeon E3 - 4 cores - 3.4 GHz - 2x 2 TB SATA3 RAID1 - 32 GB DDR3)

I am trying a test snapshot backup on a machine with a high write rate.

The test is simple: I run multiple parallel downloads:
Code:
wget -b -O file1.iso http://ftp.u-picardie.fr/mirror/ubuntu/releases//quantal/ubuntu-12.10-desktop-amd64.iso
wget -b -O file2.iso http://ftp.u-picardie.fr/mirror/ubuntu/releases//quantal/ubuntu-12.10-desktop-amd64.iso
wget -b -O file3.iso http://ftp.u-picardie.fr/mirror/ubuntu/releases//quantal/ubuntu-12.10-desktop-amd64.iso
[...]

When I run the snapshot backup, I get the following errors:
INFO: starting new backup job: vzdump 102 --remove 0 --mode snapshot --compress lzo --storage backup --node ns368405
INFO: Starting Backup of VM 102 (openvz)
INFO: CTID 102 exist mounted running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating lvm snapshot of /dev/mapper/pve-lv1 ('/dev/pve/vzsnap-ns368405-0')
INFO: Logical volume "vzsnap-ns368405-0" created
INFO: creating archive '/var/lib/vz/dump//dump/vzdump-openvz-102-2013_01_22-10_44_15.tar.lzo'
INFO: tar: ./root/file51: File shrank by 44663086 bytes; padding with zeros
INFO: tar: ./root/file53: Read error at byte 0, while reading 1536 bytes: Input/output error
INFO: tar: ./root/file52: Read error at byte 0, while reading 8192 bytes: Input/output error
INFO: tar: ./root/wget-log.2: Read error at byte 0, while reading 9216 bytes: Input/output error
INFO: tar: ./root/.bashrc: Read error at byte 0, while reading 3106 bytes: Input/output error
INFO: tar: ./root/.profile: Read error at byte 0, while reading 140 bytes: Input/output error
INFO: tar: ./root/wget-log.1: Read error at byte 0, while reading 2048 bytes: Input/output error
INFO: tar: ./root/wget-log: Read error at byte 0, while reading 1024 bytes: Input/output error
INFO: tar: ./lib/i386-linux-gnu/libnih.so.1.0.0: Read error at byte 0, while reading 10240 bytes: Input/output error
INFO: tar: ./lib/i386-linux-gnu/libexpat.so.1.5.2: Read error at byte 0, while reading 5632 bytes: Input/output error
INFO: tar: ./lib/i386-linux-gnu/librt-2.15.so: Read error at byte 0, while reading 3584 bytes: Input/output error
[...]
INFO: Total bytes written: 518871040 (495MiB, 2.0MiB/s)
INFO: tar: Exiting with failure status due to previous errors
ERROR: Backup of VM 102 failed - command '(cd /mnt/vzsnap0/private/102;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|lzop) >/var/lib/vz/dump//dump/vzdump-openvz-102-2013_01_22-10_44_15.tar.dat' failed: exit code 2
INFO: Backup job finished with errors
TASK ERROR: job errors

Here is the output of "vgdisplay":
vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 51
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.80 TiB
PE Size 4.00 MiB
Total PE 471431
Alloc PE / Size 426929 / 1.63 TiB
Free PE / Size 44502 / 173.84 GiB
VG UUID vBHM7f-3ERu-xaCJ-4eKU-VzKJ-jWWN-WN0ANe

Do you have any feedback about snapshot mode when the disk contents change a lot?
Is the problem a memory allocation setting, or something else?

Edit: dietmar says:
The LVM snapshot runs out of space. You can increase the snapshot size in /etc/vzdump.conf (see man vzdump, 'size' option).
But what exactly is the "snapshot size"? How do I calculate the size needed?
Is there a command to make the change take effect?


Thanks ;)

PS: I already posted this message yesterday, but it was deleted by mistake.
 
if you start a snapshot backup (with LVM), all writes go to the snapshot.

so if you write 1 GB inside your VM or CT during the backup (e.g. by downloading ISO images), you need 1 GB of snapshot space.
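
to see how much unallocated space the VG has left for such snapshots (the "Free PE / Size" figure from your vgdisplay output), you can run e.g.:
Code:
# VFree column = space still available for allocating LVM snapshots
vgs pve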
 
If I understood correctly:
"Free PE / Size 44502 / 173.84 GiB" is my free, unallocated space.
This space is used to hold everything that changes on disk during the backup (the snapshot is taken of /dev/mapper/pve-lv1).
During the backup, the files in the CT are copied to /var/lib/vz/dump/dump/vzdump-openvz-*.tar.lzo.
At the end, all the writes are merged and added to the tar.lzo and also to the CT.
Afterwards, the snapshot volume is removed.
Right?

If that's right, it would mean that during the backup the CT could write an additional 173.84 GB - but I only downloaded 8 GB, not 173 GB?

Thank you for helping me understand.
 
vzdump by default uses only 1 GB (of the free PEs).

if you want to use more (and you do want this), you need to configure it in /etc/vzdump.conf.

for details, see 'man vzdump'.
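
for example, a minimal /etc/vzdump.conf entry could look like this (the value is in MB per 'man vzdump'; 8192 is just an illustration - size it to the amount of data you expect to be written during one backup run):
Code:
# /etc/vzdump.conf
# LVM snapshot size in MB (default is 1024, i.e. 1 GB)
size: 8192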
 
Thanks a lot, Tom.

Can I give it the entire free space?
Also, I changed the size parameter, but how can I check that the configuration has taken effect?


Thank you
 
If I run lvdisplay during a backup, I can see the new snapshot volume:
LV Path /dev/pve/vzsnap-ns368405-0
LV Name vzsnap-ns368405-0
VG Name pve
LV UUID 4ADRAh-I5lL-czhJ-dlml-qrhI-s8q1-JOqoU1
LV Write Access read/write
LV Creation host, time ns368405, 2013-01-23 16:16:43 +0100
LV snapshot status active destination for lv1
LV Status available
# open 1
LV Size 886.45 GiB
Current LE 226930
COW-table size 168.95 GiB
COW-table LE 43250
Allocated to snapshot 1.51%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

So my new parameter shows up as the COW-table size of 168.95 GiB, right?
Do you know another method to get this information without starting a backup?
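
Edit: one way to read the configured value back without running a backup (this just checks the config file, not a live snapshot):
Code:
grep '^size' /etc/vzdump.conf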

Thx
 
arnolem said:
Can I give it the entire free space?

if your backup snapshot takes all the free space, nothing is left for additional snapshots - e.g. if your LVM is on a SAN and a second Proxmox VE node wants to run an LVM snapshot backup, you run into a problem.

arnolem said:
I changed the size parameter, but how can I check that the configuration has taken effect?

if you run a backup, you can monitor the LVM snapshot with 'lvdisplay' - you should see the snapshot and also its size. if you write something inside the VM/CT during the backup, the snapshot will grow.
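
for example, with the snapshot name from your lvdisplay output above, something like this shows the COW table filling up during a backup (the backup fails with I/O errors like yours once it reaches 100%):
Code:
# refresh every 10s; watch "Allocated to snapshot" climb while the backup runs
watch -n 10 "lvdisplay /dev/pve/vzsnap-ns368405-0 | grep -E 'Allocated to snapshot|COW-table size'"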
 
