Problems with scheduled backups

atinazzi

I am having problems with scheduled backups:

The backup of VM 105, started at 2:00 am, is still running after 14 hours. I backed up the same VM manually yesterday and that task completed in 2 hours and 30 minutes.

The VM runs SBS 2011 and has 2 disks using the virtio driver:
vm-105-disk-3 - system (where SBS is installed)
vm-105-disk-2 - data (empty)

Disk 3 is 180 GB raw
Disk 2 is 500 GB raw



Both disks are connected via iSCSI.
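
For reference, the scheduled job boils down to the vzdump call shown in the log below; to reproduce it by hand for this VM only, the same options can be reused:

Code:
# manual run of VM 105 only, using the same options as the scheduled job
vzdump 105 --mode snapshot --compress 1 --maxfiles 7 --storage Backups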

Following is the log:

INFO: trying to get global lock - waiting...
INFO: got global lock
INFO: starting new backup job: vzdump 105 106 107 108 109 217 --quiet 1 --mode snapshot --compress 1 --maxfiles 7 --storage Backups
INFO: skip external VMs: 106, 108, 109, 217
INFO: Starting Backup of VM 105 (qemu)
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870846464: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870903808: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 0: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870846464: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870903808: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 0: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870846464: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870903808: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 0: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 4096: Input/output error
INFO: creating archive '/mnt/pve/Backups/dump/vzdump-qemu-105-2012_02_25-04_59_58.tar.lzo'
INFO: adding '/mnt/pve/Backups/dump/vzdump-qemu-105-2012_02_25-04_59_58.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/dev/vg01/vm-105-disk-2' to archive ('vm-disk-virtio1.raw')

root@nd01-cl01:~# pveversion -v
pve-manager: 2.0-33 (pve-manager/2.0/c598d9e1)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-6-pve: 2.6.32-55
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-23
qemu-server: 2.0-20
pve-firmware: 1.0-15
libpve-common-perl: 1.0-14
libpve-access-control: 1.0-16
libpve-storage-perl: 2.0-12
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-4
ksm-control-daemon: 1.1-1
 
Not enough space for the snapshot? Post the output of:

Code:
lvdisplay

and

Code:
pvdisplay
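
While you are at it, it is worth checking how much free space is left in the volume group that holds the VM disks. If the snapshot area fills up during a long backup, the snapshot gets invalidated and you see exactly those read errors. If I remember correctly, the snapshot size can be raised with the size setting in /etc/vzdump.conf (value in MB; check man vzdump for your version). A rough sketch:

Code:
# show size and free space of the VG holding the VM disks
vgs vg01

# /etc/vzdump.conf - give vzdump a larger LVM snapshot area (value in MB), e.g.:
# size: 4096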
 
Just in case it helps: after 23 hours and 30 minutes I decided to stop the backup. Here is the log:

ERROR: Backup of VM 105 failed - command '/usr/lib/qemu-server/vmtar '/mnt/pve/Backups/dump/vzdump-qemu-105-2012_02_25-04_59_58.tmp/qemu-server.conf' 'qemu-server.conf' '/dev/vg01/vm-105-disk-2' 'vm-disk-virtio1.raw' '/dev/vg01/vm-105-disk-3' 'vm-disk-virtio0.raw'|lzop >/mnt/pve/Backups/dump/vzdump-qemu-105-2012_02_25-04_59_58.tar.dat' failed: interrupted by signal
INFO: Starting Backup of VM 107 (qemu)
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/Backups/dump/vzdump-qemu-107-2012_02_26-01_34_07.tar.lzo'
INFO: adding '/mnt/pve/Backups/dump/vzdump-qemu-107-2012_02_26-01_34_07.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/var/lib/vz/images/107/vm-107-disk-2.raw' to archive ('vm-disk-ide0.raw')

Below is the output of lvdisplay and pvdisplay, as requested:

root@nd01-cl01:~# lvdisplay
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870846464: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870903808: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 0: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 4096: Input/output error
--- Logical volume ---
LV Name /dev/vg01/vm-109-disk-1
VG Name vg01
LV UUID nMDW9h-Jo7O-HSE8-fKyf-yjCU-1U6H-L0M4Lo
LV Write Access read/write
LV Status NOT available
LV Size 24.00 GiB
Current LE 6145
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/vg01/vm-217-disk-1
VG Name vg01
LV UUID Et9hKy-jlrf-1nOs-oa72-BEh8-xfX0-xBWI8t
LV Write Access read/write
LV Status NOT available
LV Size 32.00 GiB
Current LE 8193
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/vg01/vm-105-disk-1
VG Name vg01
LV UUID 7aJOtk-ugrb-Js72-xSfv-HYR0-wbTw-2oII46
LV Write Access read/write
LV Status NOT available
LV Size 160.00 GiB
Current LE 40961
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/vg01/vm-105-disk-2
VG Name vg01
LV UUID WHwB6k-sSy3-ll4M-F6Vt-XXnP-dVQK-dZqXjh
LV Write Access read/write
LV snapshot status source of
/dev/vg01/vzsnap-nd01-cl01-0 [INACTIVE]
LV Status available
# open 1
LV Size 500.00 GiB
Current LE 128000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Name /dev/vg01/vm-105-disk-3
VG Name vg01
LV UUID 4Tzdel-K65b-aEeg-zGlD-fnd1-IYQ9-J2K2Ui
LV Write Access read/write
LV snapshot status source of
/dev/vg01/vzsnap-nd01-cl01-1 [active]
LV Status available
# open 0
LV Size 160.00 GiB
Current LE 40960
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8

--- Logical volume ---
LV Name /dev/vg01/vm-106-disk-1
VG Name vg01
LV UUID vHno9G-77rq-Qyze-JMbo-3Vpk-qG8h-7S4Yi0
LV Write Access read/write
LV Status NOT available
LV Size 200.00 GiB
Current LE 51201
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/vg01/vm-106-disk-4
VG Name vg01
LV UUID tSHec9-TBqV-kjKw-7KyA-v0KL-Chf3-5RpJsw
LV Write Access read/write
LV Status NOT available
LV Size 212.00 GiB
Current LE 54272
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/vg01/vzsnap-nd01-cl01-0
VG Name vg01
LV UUID xOVOlG-lDfJ-YzMe-Z7Mg-dBQy-wkeQ-zTFyzJ
LV Write Access read/write
LV snapshot status INACTIVE destination for /dev/vg01/vm-105-disk-2
LV Status available
# open 0
LV Size 500.00 GiB
Current LE 128000
COW-table size 1.00 GiB
COW-table LE 256
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

--- Logical volume ---
LV Name /dev/vg01/vzsnap-nd01-cl01-1
VG Name vg01
LV UUID DIU8qX-0NsK-QF4z-EPZ8-G9S7-5u8u-5Yr5A0
LV Write Access read/write
LV snapshot status active destination for /dev/vg01/vm-105-disk-3
LV Status available
# open 0
LV Size 160.00 GiB
Current LE 40960
COW-table size 1.00 GiB
COW-table LE 256
Allocated to snapshot 10.81%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10

--- Logical volume ---
LV Name /dev/pve/swap
VG Name pve
LV UUID 0g93DK-nupn-L4gf-Kibx-b6Ey-H57Z-A2aWeT
LV Write Access read/write
LV Status available
# open 1
LV Size 13.00 GiB
Current LE 3328
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name /dev/pve/root
VG Name pve
LV UUID O9pAuy-zTD5-WfcO-nuV0-zIvf-K2ke-uILa2G
LV Write Access read/write
LV Status available
# open 1
LV Size 34.00 GiB
Current LE 8704
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/pve/data
VG Name pve
LV UUID tf023W-v57A-3jIM-CaYw-NlKJ-H91U-cJ4o5b
LV Write Access read/write
LV Status available
# open 1
LV Size 85.20 GiB
Current LE 21811
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

root@nd01-cl01:~# pvdisplay
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870846464: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 536870903808: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 0: Input/output error
/dev/vg01/vzsnap-nd01-cl01-0: read failed after 0 of 4096 at 4096: Input/output error
--- Physical volume ---
PV Name /dev/sdc
VG Name vg02
PV Size 2.94 TiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 771686
Free PE 771686
Allocated PE 0
PV UUID dPbdGc-UR64-dbvB-63Mi-eW8A-DMTI-Mi6IfU

--- Physical volume ---
PV Name /dev/sdb
VG Name vg01
PV Size 2.94 TiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 771660
Free PE 441416
Allocated PE 330244
PV UUID Muzs5x-FhAL-eWaQ-xntS-xO0Q-0fic-B4Xpob

--- Physical volume ---
PV Name /dev/sda2
VG Name pve
PV Size 136.20 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 34866
Free PE 1023
Allocated PE 33843
PV UUID dhcYVl-PwuH-N1Hr-BEFo-Biqp-6GW0-IgV9iN
 
 
The virtual disks are on iSCSI (SAN) server 1 and the backup files sit on an NFS share on iSCSI server 2.
The server 1 logs report no issues at the time of the event, but server 2 reports frequent losses of connectivity with the Proxmox node.
That sort of error indicates that the issue is on the node side, not on the iSCSI server side, and it is normal when I reboot the node.
I cannot, however, pinpoint one of these events at the exact time the backup fails.
I will have to keep monitoring the backups for a while and post an update if the problem happens again.
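
To try to correlate those NFS drop-outs with the backup failures, a crude watchdog on the node might help in the meantime. It just logs a timestamp whenever a small write to the backup share stalls (a sketch, assuming the share is mounted at /mnt/pve/Backups as in the job log; adjust the path and interval):

Code:
#!/bin/bash
# log a timestamp whenever a write to the NFS backup share takes more than 10 seconds
while true; do
    if ! timeout 10 touch /mnt/pve/Backups/.nfs-probe 2>/dev/null; then
        echo "$(date '+%F %T') NFS share not responding" >> /var/log/nfs-probe.log
    fi
    sleep 60
done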
 
