vzdump snapshots

peter

Member
Nov 25, 2008
I've been spending most of the day reading up on vzdump snapshots and I'm still a little confused.

Here's how my PVE 1.3 server is configured:

# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name pve
PV Size 930.83 GB / not usable 2.77 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 238293
Free PE 1022
Allocated PE 237271
PV UUID KtjxRZ-yLhZ-GS10-fI46-qLXO-eINc-h822gC


# lvdisplay
--- Logical volume ---
LV Name /dev/pve/swap
VG Name pve
LV UUID Gjw3iC-0zc3-c9Mk-bS7R-0NUY-3KLZ-e4C7hM
LV Write Access read/write
LV Status available
# open 1
LV Size 4.00 GB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

--- Logical volume ---
LV Name /dev/pve/root
VG Name pve
LV UUID L4vesU-ziK4-oAdE-QAAq-RRYm-dfTz-KC2LJT
LV Write Access read/write
LV Status available
# open 1
LV Size 96.00 GB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1

--- Logical volume ---
LV Name /dev/pve/data
VG Name pve
LV UUID yZ2dag-MQxH-F29J-xVL3-ZHRd-95oN-X1tH04
LV Write Access read/write
LV Status available
# open 1
LV Size 826.84 GB
Current LE 211671
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:2

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/pve/root 95G 8.7G 82G 10% /
tmpfs 11G 0 11G 0% /lib/init/rw
udev 10M 2.7M 7.4M 27% /dev
tmpfs 11G 0 11G 0% /dev/shm
/dev/mapper/pve-data 814G 406G 409G 50% /var/lib/vz
/dev/sda1 496M 85M 387M 18% /boot


Now, I understand why I can't simply do

vzdump --snapshot 100

as the dump file would be written to /var/lib/vz/dump (I don't know why that is the default location), which sits on the same logical volume as the VM being backed up.
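(For what it's worth, it looks like the default dump directory can also be changed permanently. My understanding is that vzdump reads /etc/vzdump.conf; the keys below and the /backup path are just a sketch of what I believe that file accepts, not something I have verified on 1.3:)

# /etc/vzdump.conf
# write dumps somewhere outside /var/lib/vz
dumpdir: /backup
# temporary files created during the backup can also be redirected
tmpdir: /var/tmp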

However, if I do

vzdump --snapshot --dumpdir /var/tmp 100

The snapshot LV that gets created is the same size as /var/lib/vz, even though it's supposed to default to 1024MB. Everything appears to run and complete successfully, but I'm not confident that it's working as it should.

What am I missing here?
 
Ah, I think the confusion I'm having is that the mounted snapshot reports the same size as the logical volume it is snapshotting, rather than the size allocated to the snapshot itself.

Is that right?
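For reference, this is roughly what lvdisplay reports for the temporary snapshot LV while the backup is running. The snapshot name, the figures, and the arrow annotations are mine and only illustrative; the point is that "LV Size" shows the origin's size while "COW-table size" is the space actually reserved for the snapshot (the 1024MB default):

# lvdisplay /dev/pve/vzsnap-prism-0
--- Logical volume ---
LV Name /dev/pve/vzsnap-prism-0
VG Name pve
LV snapshot status active destination for /dev/pve/data
LV Status available
LV Size 826.84 GB        <- size of the origin, /dev/pve/data
COW-table size 1.00 GB   <- the 1024MB actually allocated for tracking changes
Allocated to snapshot 0.15%
Snapshot chunk size 4.00 KB
Block device 254:3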
 
I have a follow-up question.

I created an iSCSI VG that is 20GB. Then I created 2 KVM virtual machines on that volume group, each with its own logical volume. The first (VMID 102) is 5GB. The second (VMID 103) is 10GB. So I have 5GB of free space on the VG.

'vzdump 102' successfully does a snapshot backup.

'vzdump 103' fails with this error:

ERROR: Backup of VM 103 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-prism-0' '/dev/VolumeTD2/vm-103-disk-1'' failed with exit code 5

So does that mean I have to have at least 10GB of free space on the VG (the size of the largest logical volume)?

If so, I am confused. I thought we would only be recording changes on the snapshot volume. The man page for lvcreate says "The snapshot does not need the same amount of storage the origin has. In a typical scenario, 15-20% might be enough." Is there any way to have it create a logical volume of the size specified in the lvcreate command, i.e. 1024M?

Thanks.
 
> 'vzdump 103' fails with this error:

What is the output of 'lvs' and 'vgs' at that time?

> So does that mean I have to have at least 10GB of free space on the VG (the size of the largest logical volume)?

No, the default snapshot size is 1GB (parameter --size).
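For example, something like the following should reserve a 2GB snapshot instead of the 1GB default (the value is given in MB; 2048 is only an example figure):

vzdump --snapshot --size 2048 103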
 
Here is the output of lvs:

# lvs
LV                                       VG                                                  Attr   LSize   Origin Snap%  Move Log Copy%  Convert
MGT                                      VG_XenStorage-b61c954f-276c-57fc-baab-ee6a53141257 -wi---    4.00M
VHD-201d33dd-6d36-4da9-95fa-c06ecc4d50eb VG_XenStorage-b61c954f-276c-57fc-baab-ee6a53141257 -wi---   19.54G
vm-102-disk-1                            VolumeTD2                                           -wi-ao   5.00G
vm-103-disk-1                            VolumeTD2                                           -wi-a-  10.00G
data                                     pve                                                 -wi-ao 342.26G
root                                     pve                                                 -wi-ao  96.00G
swap                                     pve                                                 -wi-ao  23.00G

Here is the output of vgs. I'm working, presumably, with VolumeTD2:

# vgs
VG                                                 #PV #LV #SN Attr   VSize   VFree
VG_XenStorage-b61c954f-276c-57fc-baab-ee6a53141257   1   2   0 wz--n-  19.99G 452.00M
VolumeTD2                                            1   2   0 wz--n-  20.00G   5.00G
pve                                                  1   3   0 wz--n- 465.26G   4.00G

Thank you!
 
I can do a manual snapshot of the 5GB LV, but I can't with the 10GB one. Here is the result:

# lvcreate --size 1024M --snapshot --name snap /dev/VolumeTD2/vm-103-disk-1
device-mapper: create ioctl failed: Device or resource busy
Failed to suspend origin vm-103-disk-1
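I guess the next step is to check whether something is still holding the origin LV open; these are standard LVM / device-mapper tools (the dmsetup mapping name is my reconstruction, with the dashes inside the LV name doubled):

# lvdisplay /dev/VolumeTD2/vm-103-disk-1 | grep open
# dmsetup info VolumeTD2-vm--103--disk--1
# fuser -v /dev/VolumeTD2/vm-103-disk-1

If any of these report the volume as open or in use by another process, that would explain why lvcreate cannot suspend the origin.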
 
The volume group was created in the Storage section of the Proxmox configuration interface. The logical volumes were created by Proxmox when I created 2 virtual machines on that volume group.

I rebooted the Proxmox server today. When it came back up I couldn't boot either machine. I started reinstalling one and the installer was having problems with the way the partition table was set up, so I decided to scrap the VMs and start over.

I am now able to do vzdump in snapshot mode for both VMs.

I'm not sure what the cause was. I believe I had created the VMs under Proxmox 1.4 and later upgraded to 1.5, but I'm not sure about that, so I'm not really certain what caused the problem. I am grateful that it is working, though. Hopefully it doesn't happen again.

So in summary, the fix for me was to reboot, then delete and recreate the VMs.

Thanks for your help.
 
