Backup fails with LVM error


amravit

Guest
Hello there,
I received an error while I was testing backup.

Detailed backup logs:

vzdump --quiet --node 2 --snapshot --compress --dumpdir /backup --mailto linuxegypt@gmail.com 101

101: Mar 18 01:10:01 INFO: Starting Backup of VM 101 (qemu)
101: Mar 18 01:10:01 INFO: status = running
101: Mar 18 01:10:01 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap')
101: Mar 18 01:10:01 INFO: Insufficient free extents (195) in volume group pve: 256 required
101: Mar 18 01:10:01 ERROR: Backup of VM 101 failed - command '/sbin/lvcreate --size 1024M --snapshot --name vzsnap /dev/pve/data' failed with exit code 5


I have space in both pve and backup; here is the df output:

vps2:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/pve/root 95G 794M 89G 1% /
tmpfs 5.9G 0 5.9G 0% /lib/init/rw
udev 10M 68K 10M 1% /dev
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/mapper/pve-data 690G 18G 672G 3% /var/lib/vz
/dev/sda1 496M 35M 436M 8% /boot
/dev/mapper/pve-backup
580G 198M 551G 1% /backup

Any advice?
 
vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 2
Act PV 2
VG Size 1.36 TB
PE Size 4.00 MB
Total PE 357572
Alloc PE / Size 357377 / 1.36 TB
Free PE / Size 195 / 780.00 MB
VG UUID hMLacV-w1WG-3SQd-e3xc-6ls4-Qwdn-V9Lhuo
 
To resolve the issue you could manually edit /etc/cron.d/vzdump and add the --size option to it.
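For example (only a sketch; keep your own schedule and options, and pick a --size in MB that actually fits into the free space of the VG, which here is 195 PEs x 4 MB = 780 MB):

# /etc/cron.d/vzdump - illustrative line, adjust the schedule and options to your setup
# --size sets the LVM snapshot size in MB; 512 is just an example that fits into 780 MB free
10 1 * * * root vzdump --quiet --node 2 --snapshot --size 512 --compress --dumpdir /backup --mailto linuxegypt@gmail.com 101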

A more complex way is to unmount the LV "data", resize the filesystem on it, then resize the LV itself, thus gaining additional free physical extents (PEs) in VG "pve". Make backups (with a mode other than "snapshot") before doing this.
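Very roughly, and only as a sketch (it assumes an ext3 filesystem on /dev/pve/data and that you want to free about 4 GB; double-check the sizes against your own setup and have backups first):

# stop anything using /var/lib/vz first (running VMs, vzdump jobs)
umount /var/lib/vz
e2fsck -f /dev/pve/data           # the filesystem must be checked before shrinking
resize2fs /dev/pve/data 686G      # shrink the filesystem below the target LV size
lvreduce -L -4G /dev/pve/data     # shrink the LV, freeing ~1024 PEs in VG pve
resize2fs /dev/pve/data           # grow the filesystem back to fill the smaller LV
mount /var/lib/vz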
 
There is not enough free space in the VG.
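To put numbers on it: the snapshot lvcreate asks for 1024 MB, which at a PE size of 4 MB is 256 extents, but the VG only has 195 extents free:

1024 MB / 4 MB per PE = 256 PEs required
 195 PEs x 4 MB       = 780 MB free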

Looks like this is not a standard installation?
It's a standard installation; I just increased the size of pve data by adding a new hard disk,
and inside it I created a logical partition which is mounted on /backup.

I am resizing pve onto one hard disk now.

Do you think the problem might be related to the block allocation (PE size) being 4 MB while the pve VG was 1.4 TB?

regards
amr
 

You increased the size of pve data and forgot to keep some free space on the VG (a snapshot needs some space in the VG).

By default, our installer reserves about 4GB to be used by snapshots.

- Dietmar
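If you extend pve-data again later, one way to keep such headroom is to not allocate all of the VG's free extents to the LV. A sketch only (it assumes an ext3 filesystem that can be grown while mounted; the 95% figure is just an example):

# extend the LV with most, but not all, of the VG's free extents
lvextend -l +95%FREE /dev/pve/data
resize2fs /dev/pve/data            # grow the filesystem to match the new LV size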
 
vps2:~# vgdisplay
--- Volume group ---
VG Name backup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 698.63 GB
PE Size 4.00 MB
Total PE 178850
Alloc PE / Size 178688 / 698.00 GB
Free PE / Size 162 / 648.00 MB
VG UUID DlJtXa-fTFr-Yceg-g1Z4-LbR3-cO2z-fF1nU4

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 698.13 GB
PE Size 4.00 MB
Total PE 178722
Alloc PE / Size 168193 / 657.00 GB
Free PE / Size 10529 / 41.13 GB
VG UUID hMLacV-w1WG-3SQd-e3xc-6ls4-Qwdn-V9Lhuo


That's after I carried out the action plan above. What do you think?
 
What's the question?
Is this enough space to have fully functional snapshot backups?
I am asking since this is a production environment: a 2-node cluster with a master and a slave.

Second question: is there any documentation on what can be done if the master goes down? What happens to the slave servers under it?
And would the best recovery plan be to force a slave to become the master, or what exactly?


regards
 
I believe you just force one of your slaves to be the master; see the forum post here.
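If I remember right, with the PVE 1.x cluster tool that would be pveca, run on the node you want to promote (treat this as an assumption and verify against the forum post/wiki first):

# on the slave node that should take over (PVE 1.x cluster administration tool)
pveca -l    # list the cluster nodes and their roles
pveca -m    # force this node to become the master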
 
I am in a similar position. I installed on a 20 GB disk, with a mind to change the default storage location to a 320 GB RAID1. That's easy enough, but I don't want to break the web interface's Backup page.

Does the data NEED to be on /dev/pve/data, or does the backup routine dynamically work out which partition /var/lib/vz is running on?
 
My question is: does /var/lib/vz need to reside on /dev/pve/data, or can it reside on, say, /dev/anothervg/anotherlv?

I could make /dev/pve/data another physical disk with some mucking around but was hoping to simply mount /var/lib/vz onto /dev/pve/data2 without breaking anything.

Looking under the covers, I see PVE just lets vzdump sort out the snapshot, and vzdump seems to work out dynamically which LV the VZ containers are running on.

That's very cool.

So I answered my own question :D
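For the record, since vzdump figures out the LV from the mount point, pointing /var/lib/vz at another VG is essentially an fstab change plus copying the existing data over. A sketch only ("anothervg/anotherlv" is the example name from above, and ext3 is assumed):

# /etc/fstab - replace the default /dev/pve/data entry for /var/lib/vz
/dev/anothervg/anotherlv  /var/lib/vz  ext3  defaults  0  1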
 
