Backup problems

drp

New Member
Dec 4, 2014
Hi all,


I am using Proxmox 3.3-5 and I want to back up my VMs, but I get the following error:

INFO: starting new backup job: vzdump 101 --remove 0 --mode snapshot --compress gzip --storage usb-drive --node drproxmox
INFO: Starting Backup of VM 101 (openvz)
INFO: CTID 101 exist mounted running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: trying to remove stale snapshot '/dev/pve/vzsnap-drproxmox-0'
INFO: umount: /mnt/vzsnap0: not mounted
ERROR: command 'umount /mnt/vzsnap0' failed: exit code 1
INFO: /dev/pve/vzsnap-drproxmox-0: read failed after 0 of 4096 at 1012819492864: Input/output error
INFO: /dev/pve/vzsnap-drproxmox-0: read failed after 0 of 4096 at 1012819550208: Input/output error
INFO: /dev/pve/vzsnap-drproxmox-0: read failed after 0 of 4096 at 0: Input/output error
INFO: /dev/pve/vzsnap-drproxmox-0: read failed after 0 of 4096 at 4096: Input/output error
INFO: Logical volume "vzsnap-drproxmox-0" successfully removed
INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-drproxmox-0')
INFO: Logical volume "vzsnap-drproxmox-0" created
INFO: creating archive '/home/usb-backup/dump/vzdump-openvz-101-2014_12_04-12_09_25.tar.gz'
INFO: tar: ./home/admin/admin_backups/6/user.admin.amnonk.tar.gz: File shrank by 26608297 bytes; padding with zeros
INFO: tar: ./home/admin/admin_backups/6/user.eladia.spaa.tar.gz: Cannot stat: Input/output error
INFO: tar: ./home/admin/admin_backups/6/user.ors113.bepoza.tar.gz: Cannot stat: Input/output error
INFO: tar: ./home/admin/admin_backups/6/user.eladia.eladlaw.tar.gz: Cannot stat: Input/output error
INFO: tar: ./home/admin/admin_backups/6/reseller.admin.kobih67.tar.gz: Cannot stat: Input/output error
INFO: tar: ./sbin/rpcbind: Read error at byte 0, while reading 1536 bytes: Input/output error
INFO: tar: ./sbin/blkid: Read error at byte 0, while reading 7680 bytes: Input/output error
INFO: tar: ./mnt/: Cannot savedir: Input/output error
INFO: Total bytes written: 112812636160 (106GiB, 13MiB/s)
INFO: tar: Exiting with failure status due to previous errors
ERROR: Backup of VM 101 failed - command '(cd /mnt/vzsnap0/private/101;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|gzip) >/home/usb-backup/dump/vzdump-openvz-101-2014_12_04-12_09_25.tar.dat' failed: exit code 2
INFO: Backup job finished with errors
TASK ERROR: job errors


What can I do about it?
 
Most likely you ran out of LVM snapshot space. You can increase the snapshot size by setting the 'size' parameter in /etc/vzdump.conf.

See

# man vzdump
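
The 'size' value is in megabytes, and the snapshot is allocated from the free space in your volume group. As a sketch of what happens under the hood (not the exact command vzdump runs), with size: 8000 the backup would do roughly:

# create an 8000 MB snapshot of the container's data volume
lvcreate --size 8000M --snapshot --name vzsnap-drproxmox-0 /dev/pve/data
# vzdump mounts the snapshot at /mnt/vzsnap0, archives it with tar, then drops it
lvremove -f /dev/pve/vzsnap-drproxmox-0

If the snapshot fills up before tar finishes (too many blocks change on /dev/pve/data during the backup), LVM invalidates it and reads start failing with exactly the Input/output errors shown in your log.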
 
less /etc/vzdump.conf
# vzdump default settings


tmpdir: /home/usb-backup/tmpdir
dumpdir: /home/usb-backup/dumptmp
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: 50000
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
size: 16000
root@drproxmox:~# lvm
lvm> ^Z
[1]+ Stopped lvm
root@drproxmox:~# lvs
  LV   VG   Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert
  data pve  -wi-ao--- 943.26g
  root pve  -wi-ao---  96.00g
  swap pve  -wi-ao---  62.00g
root@drproxmox:~# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   3   0 wz--n- 1.09t 16.00g
root@drproxmox:~# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev                      10M     0   10M   0% /dev
tmpfs                    6.3G  356K  6.3G   1% /run
/dev/mapper/pve-root      95G   12G   79G  13% /
tmpfs                    5.0M     0  5.0M   0% /run/lock
tmpfs                     13G   16M   13G   1% /run/shm
/dev/mapper/pve-data     929G  289G  641G  32% /var/lib/vz
/dev/sda1                495M  105M  365M  23% /boot
/dev/fuse                 30M   20K   30M   1% /etc/pve
/var/lib/vz/private/100  300G   99G  202G  33% /var/lib/vz/root/100
none                     7.9G  4.0K  7.9G   1% /var/lib/vz/root/100/dev
none                     7.9G     0  7.9G   0% /var/lib/vz/root/100/dev/shm
none                     7.9G  4.5M  7.9G   1% /var/lib/vz/root/100/tmp
/var/lib/vz/private/103   50G   29G   22G  57% /var/lib/vz/root/103
/var/lib/vz/private/104   50G   14G   37G  28% /var/lib/vz/root/104
none                     4.0G  4.0K  4.0G   1% /var/lib/vz/root/104/dev
none                     4.0G     0  4.0G   0% /var/lib/vz/root/104/dev/shm
/var/lib/vz/private/102  101G   18G   84G  18% /var/lib/vz/root/102
none                     4.0G  4.0K  4.0G   1% /var/lib/vz/root/102/dev
none                     4.0G     0  4.0G   0% /var/lib/vz/root/102/dev/shm
/var/lib/vz/private/101  150G  130G   21G  87% /var/lib/vz/root/101
none                     7.9G  4.0K  7.9G   1% /var/lib/vz/root/101/dev
none                     7.9G     0  7.9G   0% /var/lib/vz/root/101/dev/shm
none                     7.9G  836K  7.9G   1% /var/lib/vz/root/101/tmp
/dev/sdb1                1.8T  1.3T  508G  71% /home/usb-backup
root@drproxmox:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID Ao1rik-hzZZ-WL8U-rBsU-iz30-dQgp-iUhDpk
LV Write Access read/write
LV Creation host, time proxmox, 2013-09-30 19:46:51 +0300
LV Status available
# open 1
LV Size 62.00 GiB
Current LE 15872
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID jFCvaD-03WL-DS8h-lCkx-kM0t-kN8G-kFVYMJ
LV Write Access read/write
LV Creation host, time proxmox, 2013-09-30 19:46:52 +0300
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/data
LV Name data
VG Name pve
LV UUID PYdKj5-eXlm-qkmw-5KzW-mu2c-MJFU-tZvLAm
LV Write Access read/write
LV Creation host, time proxmox, 2013-09-30 19:46:52 +0300
LV Status available
# open 1
LV Size 943.26 GiB
Current LE 241475
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

root@drproxmox:~# ^C
root@drproxmox:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5000
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.09 TiB
PE Size 4.00 MiB
Total PE 286018
Alloc PE / Size 281923 / 1.08 TiB
Free PE / Size 4095 / 16.00 GiB
VG UUID omklXI-0GbM-0no3-qRQk-cM1k-aWSJ-UPmhMV
 
less /etc/vzdump.conf
# vzdump default settings


#size: 50000

By default we use 1 GB (size: 1000; the value is in MB) for LVM snapshot space. Increase it, for example to 8 GB:

size: 8000


Free PE / Size 4095 / 16.00 GiB
VG UUID omklXI-0GbM-0no3-qRQk-cM1k-aWSJ-UPmhMV

You have a total of 16 GB free in the volume group, so a snapshot can be at most 16 GB.
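
That figure comes straight from the free extents in your vgdisplay output:

4095 free PE * 4.00 MiB per PE = 16380 MiB ≈ 16.00 GiB

While the next backup runs, you can watch how full the snapshot gets via the Data% column:

lvs /dev/pve/vzsnap-drproxmox-0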
 
My VMs are more than 150 GB in size.
What do you recommend I do?
If I understand correctly, I can't configure more than 16 GB of LVM snapshot space.
Is there any way to resize the LVM?