backup issues

ok so my install is on top of Lenny because of limitations of where our server is hosted

i set up backups but they haven't been running. i looked at one of the backup logs and this is what is in it:
Code:
Sep 10 00:00:02 INFO: Starting Backup of VM 104 (openvz)
Sep 10 00:00:02 INFO: status = CTID 104 exist mounted running
Sep 10 00:00:02 INFO: creating lvm snapshot of /dev/mapper/proxmox-system ('/dev/proxmox/vzsnap')
Sep 10 00:00:02 INFO:   Insufficient free extents (2) in volume group proxmox: 256 required
Sep 10 00:00:02 ERROR: Backup of VM 104 failed - command '/sbin/lvcreate --size 1024M --snapshot --name vzsnap /dev/proxmox/system' failed with exit code 5

now i am using LVM with everything in one partition (except swap)

here are the outputs of various LVM commands:

Code:
proxmox:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               proxmox
  PV Size               1.36 TB / not usable 3.65 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              357563
  Free PE               2
  Allocated PE          357561
  PV UUID               ScIvaq-OEEG-OIsV-QfOJ-eU6O-pftc-iMjOxV

Code:
proxmox:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/proxmox/system
  VG Name                proxmox
  LV UUID                dqRB5v-VdRh-bqvp-HAVa-yFC7-XvoI-LMs1ko
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.36 TB
  Current LE             355653
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0
   
  --- Logical volume ---
  LV Name                /dev/proxmox/swap
  VG Name                proxmox
  LV UUID                N0UFoa-mMtH-pRr7-xxDQ-bcuR-qJmZ-Uvxpcm
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.45 GB
  Current LE             1908
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

Code:
proxmox:~# vgdisplay
  --- Volume group ---
  VG Name               proxmox
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TB
  PE Size               4.00 MB
  Total PE              357563
  Alloc PE / Size       357561 / 1.36 TB
  Free  PE / Size       2 / 8.00 MB
  VG UUID               7a9dnH-GU63-BQQQ-PjK9-VorR-5T2Z-2cEmIs

can anyone suggest what i need to do to get backups running again?
 

looks like you don't have enough space for storing the snapshot, only 8 MB free. default installations leave 4 GB free.
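
you can double-check the free space in the volume group at any time with the lvm2 tools:

Code:
vgs proxmox
lvs proxmox

vzdump creates a temporary snapshot LV (1 GB by default) inside the same volume group, so the 'VFree' column needs to stay above that.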
 
ok this is slightly confusing

the drive has over 1TB free. do you mean that PVE doesn't allocate about 4GB of space, so it's unusable by the "system" LV but available for the LVM snapshots?

can you tell me the commands i need to run to resize the LV down so there is at least 4GB of free space? i'm not confident with LVM commands yet.

also, does it matter if a VM is larger than 4GB?
 
yes, it's not that simple to configure - that's the reason why our installer does this automatically :)

see LVM2 for more info, e.g. http://tldp.org/HOWTO/LVM-HOWTO/
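
if you'd rather not touch the root filesystem, one comparatively safe way to free a few GB with your layout is to shrink the swap LV instead of /dev/proxmox/system. roughly (only a sketch - make sure you understand each step, and that the box is not relying on swap at that moment):

Code:
swapoff /dev/proxmox/swap            # stop using the swap LV
lvreduce -L -4G /dev/proxmox/swap    # shrink it by 4 GB, which frees extents in the VG
mkswap /dev/proxmox/swap             # write a new swap signature on the smaller LV
swapon /dev/proxmox/swap             # enable it again

afterwards vgdisplay should show about 4 GB free for the snapshot.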
 
i know that and would have preferred to run the installer myself. however, we couldn't get physical access when we needed the server installed, and the only available method was PXE/netboot, which is not yet implemented by PVE, so i had to do a Debian Lenny install.

i have resized the volume correctly now and made 16GB available. does it matter if my VM ever goes above this size?

also, my next issue to resolve now that backups work is getting the email report to work. i have the following errors popping up in syslog:
Code:
Sep 12 11:08:42 proxmox vzdump[10899]: Finished Backup of VM 102 (00:00:52)
Sep 12 11:08:42 proxmox postfix/pickup[10684]: 2AB18168809: uid=0 from=<root>
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 2AB18168809: message-id=<20090912100842.2AB18168809@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 2AB18168809: from=<root@proxmox.christchurchlondon.org>, size=4591, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/pickup[10684]: 2DBA2168808: uid=0 from=<root>
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 2DBA2168808: message-id=<20090912100842.2DBA2168808@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 2DBA2168808: from=<root@proxmox.christchurchlondon.org>, size=781, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 2F0EE16880A: message-id=<20090912100842.2DBA2168808@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/error[11044]: 2AB18168809: to=<webmaster@christchurchlondon.org>, relay=none, delay=0.03, delays=0.02/0/0/0.01, dsn=5.0.0, status=bounced (christchurchlondon.org)
Sep 12 11:08:42 proxmox postfix/local[11045]: 2DBA2168808: to=<root@proxmox.christchurchlondon.org>, orig_to=<root>, relay=local, delay=101, delays=101/0/0/0.01, dsn=2.0.0, status=sent (forwarded as 2F0EE16880A)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 2F0EE16880A: from=<root@proxmox.christchurchlondon.org>, size=943, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 2DBA2168808: removed
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 30FFB168808: message-id=<20090912100842.30FFB168808@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/bounce[11046]: 2AB18168809: sender non-delivery notification: 30FFB168808
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 30FFB168808: from=<>, size=6551, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 2AB18168809: removed
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 31914168809: message-id=<20090912100842.30FFB168808@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/error[11044]: 2F0EE16880A: to=<webmaster@christchurchlondon.org>, orig_to=<root>, relay=none, delay=0.02, delays=0.01/0/0/0.01, dsn=5.0.0, status=bounced (christchurchlondon.org)
Sep 12 11:08:42 proxmox postfix/local[11045]: 30FFB168808: to=<root@proxmox.christchurchlondon.org>, relay=local, delay=0.01, delays=0/0/0/0.01, dsn=2.0.0, status=sent (forwarded as 31914168809)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 31914168809: from=<>, size=6713, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 30FFB168808: removed
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 348E416880C: message-id=<20090912100842.348E416880C@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/error[11044]: 31914168809: to=<webmaster@christchurchlondon.org>, orig_to=<root@proxmox.christchurchlondon.org>, relay=none, delay=0.02, delays=0.01/0/0/0.01, dsn=5.0.0, status=bounced (christchurchlondon.org)
Sep 12 11:08:42 proxmox postfix/bounce[11047]: 2F0EE16880A: sender non-delivery notification: 348E416880C
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 348E416880C: from=<>, size=2899, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 2F0EE16880A: removed
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 31914168809: removed
Sep 12 11:08:42 proxmox postfix/cleanup[11042]: 3794D1684A5: message-id=<20090912100842.348E416880C@proxmox.christchurchlondon.org>
Sep 12 11:08:42 proxmox postfix/local[11045]: 348E416880C: to=<root@proxmox.christchurchlondon.org>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (forwarded as 3794D1684A5)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 3794D1684A5: from=<>, size=3061, nrcpt=1 (queue active)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 348E416880C: removed
Sep 12 11:08:42 proxmox postfix/error[11044]: 3794D1684A5: to=<webmaster@christchurchlondon.org>, orig_to=<root@proxmox.christchurchlondon.org>, relay=none, delay=0.01, delays=0/0/0/0.01, dsn=5.0.0, status=bounced (christchurchlondon.org)
Sep 12 11:08:42 proxmox postfix/qmgr[10685]: 3794D1684A5: removed
 
i have resized the volume correctly now and made 16GB available. does it matter if my VM ever goes above this size?

No, because snapshots only store changes. And by default vzdump only uses 1GB for the snapshot.
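
if the 1GB default ever turns out to be too small for a busy VM, vzdump takes the snapshot size in MB via --size (the same value can go into /etc/vzdump.conf as 'size:'), for example something like:

Code:
vzdump --snapshot --size 2048 --storage yourbackupstorage 104

('yourbackupstorage' is just a placeholder for whatever your backup storage is called).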

Try to test your mail system first using the 'mail' command, like:

Code:
mail -s testmail you@yourdomain.com </dev/null
 
running the command you recommended in the shell produced the following error:

Code:
-bash: mail: command not found

i resolved my issue by reconfiguring postfix to relay via our mail server rather than defaulting to error
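
for anyone else who hits this: the postfix side is just a matter of pointing it at a smarthost and reloading, and the 'mail' command itself should come from the mailx package on lenny (mail.example.com below is a placeholder for your own relay):

Code:
# in /etc/postfix/main.cf
relayhost = [mail.example.com]

# then
postfix reload
apt-get install mailx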
 
I have a small problem with snapshot backups too.
from the previous posts i suppose my problem is too little free space in the volume group:

Code:
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
.....
Total PE 356988
Alloc PE / Size 356864 / 1.36 TB
Free PE / Size 124 / 496.00 MB

this happened because i changed the default disk structure created by the installer:
Code:
--- Logical volume ---
LV Name /dev/pve/swap
VG Name pve
LV Size 24.00 GB

LV Name /dev/pve/root
VG Name pve
LV Size 50.00 GB

LV Name /dev/pve/data
VG Name pve
LV Size 1020.00 GB

LV Name /dev/pve/vm-101-disk-1
VG Name pve
LV Size 300.00 GB

Now if i try to create a backup using snapshot mode i get a similar error to the one from the first post:
Code:
Insufficient free extents (124) in volume group pve: 256 required
106: Dec 04 10:15:02 ERROR: Backup of VM 106 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-mimban-0' '/dev/pve/data'' failed with exit code 5

If i stop the VM, the backup is created successfully. Detailed backup log:

Code:
vzdump --quiet --snapshot --compress --storage backup2 --mailto xxxxxxx 106

106: Dec 04 10:26:01 INFO: Starting Backup of VM 106 (qemu)
106: Dec 04 10:26:01 INFO: stopped
106: Dec 04 10:26:01 INFO: status = stopped
106: Dec 04 10:26:01 INFO: backup mode: stop
106: Dec 04 10:26:01 INFO: bandwidth limit: 10240 KB/s
106: Dec 04 10:26:02 INFO: creating archive '/kopie/vzdump-qemu-106-2009_12_04-10_26_01.tgz'
106: Dec 04 10:26:02 INFO: adding '/kopie/vzdump-qemu-106-2009_12_04-10_26_01.tmp/qemu-server.conf' to archive ('qemu-server.conf')
106: Dec 04 10:26:02 INFO: adding '/var/lib/vz/images/106/vm-106-disk-2.qcow2' to archive ('vm-disk-ide0.qcow2')
106: Dec 04 10:27:19 INFO: Total bytes written: 248704512 (3.08 MiB/s)
106: Dec 04 10:27:19 INFO: archive file size: 73MB
106: Dec 04 10:27:19 INFO: Finished Backup of VM 106 (00:01:18)

How do i fix this problem?
Resize something?

Thanks for advice ...
 
simply free some space in the volume group (resize or delete existing logical volumes).

ok, i'm a newbie with LVM ... can i safely use e.g. lvresize -L 40G /dev/pve/root on a working system?
or should i stop all the VMs, boot a rescue CD (or something) and then do the resize?
 
ok, i'm a newbie with LVM ... can i safely use e.g. lvresize -L 40G /dev/pve/root on a working system?

never do that - there is a filesystem on the device!

or should i stop all the VMs, boot a rescue CD (or something) and then do the resize?

I guess that is too difficult to explain here. you first need to boot from a rescue system, shrink the filesystem, then lvresize - quite a difficult and dangerous task - maybe you'd better reinstall.
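
if you do attempt it anyway, the rough order of operations is (only a sketch - the sizes below are made-up examples, the LV must be unmounted, and you need a verified backup first):

Code:
umount /var/lib/vz                # /dev/pve/data is normally mounted here
e2fsck -f /dev/pve/data           # the filesystem must be checked before shrinking
resize2fs /dev/pve/data 1000G     # shrink the ext3 filesystem first, below the new LV size
lvreduce -L 1010G /dev/pve/data   # then shrink the LV, keeping a margin above the filesystem
resize2fs /dev/pve/data           # grow the filesystem back to exactly fill the smaller LV
mount /var/lib/vz

the order matters: filesystem first, then the LV - the other way round destroys the filesystem.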
 
I guess that is too difficult to explain here. you first need to boot from a rescue system, shrink the filesystem, then lvresize - quite a difficult and dangerous task - maybe you'd better reinstall.

It's a production environment ... but the important qemu/kvm guests live on another disk, so i'll try to umount the pve-data volume and shrink it. I've tested that operation on a home computer and everything was all right ...

Thanks for the advice
 
