Good morning,
For a few weeks now, I think since I upgraded to 3.3, I have had the problem that when a backup fails, it kills the guest entirely. For example:
Code:
INFO: Starting Backup of VM 205 (qemu)
INFO: status = running
INFO: update VM 205: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/dailybackup/dump/vzdump-qemu-205-2014_12_02-02_04_55.vma.lzo'
INFO: started backup task '5c79cab1-a8e2-4f1b-94b4-6444e6fb5e29'
INFO: status: 0% (399114240/80530636800), sparse 0% (4812800), duration 3, 133/131 MB/s
INFO: status: 1% (847642624/80530636800), sparse 0% (7905280), duration 7, 112/111 MB/s
INFO: status: 2% (1664090112/80530636800), sparse 0% (9674752), duration 13, 136/135 MB/s
INFO: status: 3% (2512519168/80530636800), sparse 0% (43184128), duration 20, 121/116 MB/s
INFO: status: 4% (3274702848/80530636800), sparse 0% (52264960), duration 27, 108/107 MB/s
INFO: status: 5% (4139384832/80530636800), sparse 0% (56594432), duration 35, 108/107 MB/s
ERROR: VM 205 not running
INFO: aborting backup job
ERROR: VM 205 not running
ERROR: Backup of VM 205 failed - VM 205 not running
In the event log of the failed guest there is nothing to be seen except the information that the machine stopped working at a specific time (it looks just the same as when you pull the power plug).
Before the upgrade to 3.3 everything ran fine for months, without a single failed backup or stopped machine.
Code:
pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
It is a 4-node cluster setup, non-HA.
The backup storage is a ZFS server, connected via NFS.
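For reference, the NFS storage is defined roughly like this in /etc/pve/storage.cfg (the server address and export path here are placeholders, not my real values):
Code:
# placeholder values: server address and export path are not my real ones
nfs: dailybackup
        path /mnt/pve/dailybackup
        server 192.168.1.50
        export /tank/pve-backup
        content backup
        maxfiles 3
        options vers=3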
Previously it was not a problem that all 4 servers were backing up at the same time; for now I think I will have to set a manual backup time slot for every cluster member to see if that is the cause, as sketched below.
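As a sketch of what I mean: the jobs in /etc/pve/vzdump.cron can be restricted to a single node with --node, so the start times could be staggered an hour apart (node names and times below are only examples):
Code:
# /etc/pve/vzdump.cron - one job per node, staggered start times (example values)
0 1 * * * root vzdump --quiet 1 --mode snapshot --storage dailybackup --all 1 --node node1
0 2 * * * root vzdump --quiet 1 --mode snapshot --storage dailybackup --all 1 --node node2
0 3 * * * root vzdump --quiet 1 --mode snapshot --storage dailybackup --all 1 --node node3
0 4 * * * root vzdump --quiet 1 --mode snapshot --storage dailybackup --all 1 --node node4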
Are there any known problems with such a setup?
It would therefore be nice if the backup configuration had an option to run only one backup at a time across the whole cluster, instead of one on every cluster member.
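Until such an option exists, a crude workaround might be to serialize the jobs myself with a lock on the shared NFS export, e.g. with a wrapper script like this (untested sketch; the lock path and vzdump options are only examples):
Code:
#!/bin/sh
# untested sketch: run vzdump under a crude cluster-wide lock
# mkdir is atomic over NFS, so all 4 nodes see the same lock directory
LOCKDIR=/mnt/pve/dailybackup/.vzdump-lock

# wait until no other node holds the lock
until mkdir "$LOCKDIR" 2>/dev/null; do
    sleep 60
done
trap 'rmdir "$LOCKDIR"' EXIT

vzdump --all 1 --quiet 1 --mode snapshot --storage dailybackup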
Thanks