Proxmox 3.0 backup issue



Today I hit this issue during a backup:

lzop: Host is down: <stdout>

The backup storage is an SMB share.
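Since "Host is down" points at the share rather than at lzop itself, a quick sanity check before the backup runs can confirm whether the SMB mount is actually reachable. A minimal sketch (the mount point `/mnt/pve/backup-smb` is a placeholder; adjust it to your storage path):

```shell
#!/bin/sh
# Sketch: verify the SMB backup mount answers before vzdump writes to it.

share_alive() {
    # Succeeds only if the path is a mounted filesystem and responds
    # to a directory listing within 10 seconds.
    mountpoint -q "$1" && timeout 10 ls "$1" >/dev/null 2>&1
}

# Hypothetical mount point - replace with your real SMB storage path.
BACKUP_MOUNT="${1:-/mnt/pve/backup-smb}"
if share_alive "$BACKUP_MOUNT"; then
    echo "SMB share reachable"
else
    echo "SMB share down"
fi
```

If the share reports down intermittently, that matches the theory below about a short network outage during the backup window.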

pveversion -v
pve-manager: 3.0-20 (pve-manager/3.0/0428106c)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-15
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-6
vncterm: 1.1-3
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-12
ksm-control-daemon: 1.1-1
any suggestions?

Kind regards

hehe ;)

that would be the easiest thing ;)
No, it's not. It could just be that the communication to the host was broken at the time of the backup (a short network outage).

Is this the only backup job running in this state? How can I kill that task?

of course:
killall -9 vzdump

but this task still remains:

 task UPID:csid:00052933:02203E49:51BD0002:vzdump::root@pam
It also cannot be killed by its PID.
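For reference, the worker PID is encoded in hexadecimal inside the UPID string (the format is `UPID:node:pid:pstart:starttime:type:id:user`), so it can be extracted and converted before trying to kill it. A sketch using the UPID from this thread:

```shell
#!/bin/sh
# Extract the hex PID field from a Proxmox UPID and convert it to decimal.
UPID="UPID:csid:00052933:02203E49:51BD0002:vzdump::root@pam"

# Field 3 (colon-separated) is the worker PID in hexadecimal.
HEXPID=$(echo "$UPID" | cut -d: -f3)
PID=$(printf '%d' "0x$HEXPID")
echo "worker PID: $PID"

# If that process still exists, it can then be killed:
# kill -9 "$PID"
```

If the decimal PID no longer exists, the task entry is stale: the worker already died, and only the task list still shows it.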

You're right, the bug is back on our systems; we have the same problem again.

Do you have any idea how to fix it? The Proxmox team closed the bug entry, and many other users have this problem without a solution. :(

Not sure if it is related, but I have a slightly different problem with the 3.0 backup.
When I back up a VM with the snapshot option, the load on the host node stays normal, but the load inside this VM goes through the roof; I have to stop the backup, kill the VM, and restart it.
Normally, this VM works very well, it is fast and the load is low, also other virtual machines on the same server are backed up normally.
The only difference between this VM and the others is that it is restored from a 1.9 Proxmox server.
This happens on a soft RAID 10 (never had any problems with software RAID and Proxmox) with ext4 filesystem, on a standalone host with 24 GB of RAM and the CFQ scheduler.

Could it be related to the same issue of this thread?
Well, the first thing a Proxmox engineer will say is that the issue comes from software RAID, but that's probably not it. I've seen and installed a lot of Proxmox instances on software RAID without any performance issues; sometimes the performance is even better than with hardware RAID.

What I think is that the load inside the VM comes from the I/O delay the backup causes on the host. A hardware RAID controller reduces this I/O delay because it has a cache. But this is only a plausible explanation if the load goes back down once the backup finishes.

The migration from 1.9 is the other possibility. What I've done before is create a new VM, copy the hard disk into it, and run the new VM instead; that may fix your issue.
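If the I/O delay during backup is the culprit, one mitigation worth trying is throttling vzdump. A sketch of the relevant options in `/etc/vzdump.conf` (the values are examples, not a confirmed fix for this bug; the same options can also be passed on the vzdump command line):

```
# /etc/vzdump.conf - global vzdump defaults (example values)
# Cap backup read bandwidth at ~20 MB/s (the value is in KB/s).
bwlimit: 20480
# Lower the I/O priority of the backup process (range 0-8).
ionice: 8
```

A bandwidth cap makes the backup take longer, but it leaves more disk throughput for the running guests.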
Thanks MasterTH,

I will try to find a way to reduce the IOwait on this VM.
The disk is in raw format; could it help to convert it to qcow2?
I don't have any performance benchmarks with different file formats yet; maybe a Proxmox engineer can answer that question.
But what would really help is using the virtio bus type for the disk.

