Backup error

nadir.latif

Hello,

We created a backup job in Proxmox. The backup job runs on time but does not successfully back up all the VMs. The error log shows the following error for the OpenVZ-based VMs:

INFO: tar: ./lib64/libsemanage.so.1: Read error at byte 0, while reading 1536 bytes: Input/output error
INFO: Total bytes written: 670556160 (640MiB, 1.1MiB/s)
INFO: tar: Exiting with failure status due to previous errors
ERROR: Backup of VM 118 failed - command '(cd /mnt/vzsnap0/private/118;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|lzop) >/mnt/pve/backup/dump/vzdump-openvz-118-2014_07_15-07_43_16.tar.dat' failed: exit code 2
INFO: Starting Backup of VM 103 (qemu)

For the KVM VMs we get a timeout error. Also, during the backup all the nodes turn red. All VMs are being backed up using the snapshot option. We are using Proxmox 3.2-4, which is the development version.
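One way to narrow this down is to re-run the backup for the single failing container in the foreground and watch the snapshot device while it runs. This is only a debugging sketch using the standard vzdump and LVM tools; the VM ID 118, the storage name 'backup' and the lzo compression are taken from the error log above:

vzdump 118 --mode snapshot --storage backup --compress lzo   # same container, mode and compression as the failing job
lvs                                                          # check the temporary vzsnap snapshot volume and how full it gets
dmesg | tail                                                 # look for disk/LVM errors around the time of the tar 'Read error'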

Thanks,

Nadir Latif
 
Hello,

Output of pveversion -v is:

root@swarm:~# pveversion -v
proxmox-ve-2.6.32: 3.2-129 (running kernel: 2.6.32-30-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
root@swarm:~#

Thanks,

Nadir Latif
 
What kind of storage are you using for backups?
Post your /etc/pve/storage.cfg.
 
We are using NFS-based storage for backups. Our /etc/pve/storage.cfg file is:

root@swarm:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

nfs: backup
        path /mnt/pve/backup
        server nas.bipmedia.com
        export /mnt/Bipmedia_Storage_Cluster/backup
        options vers=3
        content backup
        maxfiles 1

nfs: iso
        path /mnt/pve/iso
        server nas.bipmedia.com
        export /mnt/Bipmedia_Storage_Cluster/iso
        options vers=3
        content iso
        maxfiles 1
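As a quick sanity check that the 'backup' NFS storage above is reachable from the node, something like the following can help; this is only a sketch using standard tools (pvesm, showmount, mount), with the hostname and mount point taken from the config above:

pvesm status                    # the 'backup' storage should be listed as active
showmount -e nas.bipmedia.com   # the exports the NAS is actually offering
mount | grep /mnt/pve/backup    # confirm the share is mounted on the node (and with vers=3)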

Thanks,

Nadir Latif
 
Hello,

We have another backup-related problem. We configured backup storage on Proxmox using NFS. If the Proxmox cluster loses connectivity with the NFS server, all the Proxmox nodes turn red. Is there a way to prevent this, so that if we take our NFS server offline the Proxmox nodes don't go red?

Regards,
Nadir
 
If the Proxmox cluster loses connectivity with the NFS server, all the Proxmox nodes turn red. Is there a way to prevent this, so that if we take our NFS server offline the Proxmox nodes don't go red?

That's the expected and safe behaviour, I guess, if PVE loses connectivity to a configured (and therefore possibly in-use) storage.
How else could you be warned that this was happening? PVE detects an unexpected disconnection and goes red.
If you don't like the red cluster, remove the storage from the cluster first (via the web GUI, then check the mounts), and only then take it offline.
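A rough CLI equivalent of those GUI steps, as a sketch only ('backup' is the storage name from the config earlier in the thread, and the --disable flag assumes your pvesm version supports the storage 'disable' option):

pvesm set backup --disable 1    # or remove it entirely: pvesm remove backup
umount /mnt/pve/backup          # make sure the NFS mount is actually gone on every node
# ... take the NFS server offline for maintenance ...
pvesm set backup --disable 0    # re-enable the storage once the NAS is back online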

Marco
 
