Stop backup of VM failed exit code 255 - and left VM powered down

timdonovan

Active Member
Feb 3, 2020
Hi,

Last night one of my VMs failed to back up, and worse, Proxmox left it in a powered-off state. There isn't much to go on in the logs; is there anywhere else to see what caused this?


Code:
Task viewer: Backup Job
INFO: starting new backup job: vzdump 100 101 103 104 --mode stop --mailnotification failure --compress zstd --quiet 1 --storage backup
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2021-04-14 04:00:03
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: docker
INFO: include disk 'scsi0' 'local-zfs:vm-100-disk-1' 30G
INFO: stopping vm
INFO: VM quit/powerdown failed
ERROR: Backup of VM 100 failed - command 'qm shutdown 100 --skiplock --keepActive --timeout 600' failed: exit code 255
INFO: Failed at 2021-04-14 04:10:04

Looking at the VM task view, it looks like it waited 10 minutes for the powerdown to happen.

AFAIK qemu-guest-agent was and still is running fine in the VM, and surely the backup job would fall back to an ACPI poweroff anyway if the guest-agent shutdown failed.
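For reference, these are the first checks I'd run from the host to confirm the agent path is actually working; a sketch only, assuming VMID 100 as in the log above:

```shell
# Check whether the guest agent is enabled in the VM config (assumes VMID 100)
qm config 100 | grep agent

# Ping the agent from the host; an error here means QEMU can't reach it
qm agent 100 ping

# Reproduce the backup job's shutdown command by hand, with a shorter timeout,
# to see whether it fails outside of vzdump as well
qm shutdown 100 --skiplock --keepActive --timeout 120
```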

Thanks.
 
Sorry to revive such an old thread, but I'm having this issue as well and I'm not sure what's causing it. Two of my six VMs are affected. Also, manually initiated stop-mode backups work just fine on them, which further confuses me.

Any suggestions as to what may be going on with the scheduled stop backups that makes them fail when the manually initiated ones succeed?
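One way to narrow down the scheduled-vs-manual difference is to pull the logs from around the scheduled run; a sketch, assuming VMID 100 and an example time window (on PVE 7.1 scheduled backups run via the pvescheduler service):

```shell
# Log of the scheduler that launches scheduled vzdump jobs (PVE 7.1+);
# adjust the window to your backup schedule
journalctl -u pvescheduler --since "04:00" --until "04:15"

# vzdump task logs are also kept on disk; grep for the failing VMID
grep -r "VM 100" /var/log/pve/tasks/ | tail
```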

Package versions:
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-6-pve: 5.13.19-14
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Thank you.
 

Try reading this; it may help you:

https://pve.proxmox.com/wiki/QEMU/KVM_ACPI_Guest_Shutdown

This post might also help you resolve your issue:

https://forum.proxmox.com/threads/bug-vm-dont-stop-shutdown.9020/
 
Thanks for the suggestions. Unfortunately, installing acpid didn't help, and the second link didn't apply to my situation.

I was able to catch the errors in one of the affected VMs while it tried an automated stop backup:

[screenshot: console errors during the automated stop backup]

The above stop job lasted for several minutes.

[two further screenshots of the repeating errors]

Those messages repeated for another few minutes until the backup finally aborted with the error stated in the OP, and the VM was automatically stopped and left in that state.

I did realize the one thing the affected Ubuntu server VMs had in common: I had installed the HWE kernel because I planned to eventually pass through GPUs to them for hardware-assisted video encoding. I've reverted to the default kernel on both. I'll mark this as solved if that fixes the issue.
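For anyone wanting to check the same thing, this is a rough sketch of verifying and reverting the HWE kernel inside the guest; package names assume Ubuntu 20.04, so adjust for your release:

```shell
# Inside the Ubuntu guest: see which kernel is running and whether HWE is installed
uname -r
apt list --installed 2>/dev/null | grep -i hwe

# Revert to the GA (default) kernel; package names assume Ubuntu 20.04
sudo apt install linux-generic
sudo apt remove linux-generic-hwe-20.04 linux-image-generic-hwe-20.04
sudo reboot
```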
 
Removing the HWE kernel and reverting to the default kernel seems to have solved the problem. I probably shouldn't (and likely can't) mark this as solved, since I'm not the OP and the OP's underlying issue may have been different.
 
