[SOLVED] Qemu-server 8.3.14 prevents VM shutdown

badsectorlabs

New Member
Jul 10, 2025
On Proxmox 8.4.1 (installed today on top of Debian), the "Shutdown" command sends the VM to a black screen but the VM fails to power off. This happens on Linux and Windows VMs. This causes Packer VM templates to fail, as the shutdown is a required step before the VM can be converted to a template. Downgrading the qemu-server package to 8.3.12 or 8.3.13 solves the issue and allows VMs to power off.
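
For context, this is roughly where our Packer template builds hang (the VM id below is just a placeholder, not the real one):

Code:
qm shutdown 9000 --timeout 300   # VM drops to a black screen and the task eventually times out
qm template 9000                 # never reached, so the template build fails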

pveversion --verbose below:
Code:
proxmox-ve: 8.4.0 (running kernel: 6.8.12-11-pve)
pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.8.12-11-pve-signed: 6.8.12-11
proxmox-kernel-6.8: 6.8.12-11
ceph-fuse: 16.2.15+ds-0+deb12u1
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
dnsmasq: 2.90-4~deb12u1
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx11
intel-microcode: 3.20250512.1~deb12u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.1
libpve-cluster-perl: 8.1.1
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.2-1
proxmox-backup-file-restore: 3.4.2-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.3
proxmox-mini-journalreader: 1.5
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.11
pve-cluster: 8.1.1
pve-container: 5.2.7
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.2
pve-firmware: 3.15-4
pve-ha-manager: 4.0.7
pve-i18n: 3.4.5
pve-qemu-kvm: 9.2.0-6
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.14
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2
 
the "Shutdown" command
To clarify: did you press the Shutdown button in the PVE GUI for that VM, or issue that command from within the VM?

Just tested on both Linux & Windows VMs (with & without GA) on qemu-server: 8.3.14 & Shutdown works as expected (both from PVE & within the VM as above).

I'm guessing your issue is down to the fact that you installed on top of Debian rather than from the official installer, or to whatever underlying issue led you to make that choice.
 
Seeing this same issue in our environment...

We build a number of Proxmox hosts by installing Debian and then installing Proxmox over the top.
Our automation then runs qm commands via SSH on the hypervisor to create VMs and configure them... during that customisation we issue qm reboot <vmID> commands, and this week this has started failing. Issuing a shutdown via the web UI also freezes the VM.

When using the command line, we eventually get the message closing with read buffer at /usr/share/perl5/IO/Multiplex.pm line 927. However, the `stop` or `reset` qm functionality works as expected.

If we log into the VM itself and issue a sudo reboot command, this works as expected, so we're thinking it's something on the qm/guest-agent side rather than in the VM configuration?

EDIT: Just installed qemu-server version 8.3.13 on an affected host, and this seems to resolve our problem as well. This was done using sudo apt-get install qemu-server=8.3.13 and no reboot of the host was required.
 
Do you have qemu-guest-agent installed in the VM?

If not, then check whether you have the QEMU Guest Agent option checked in the VM config.
If you checked that option but do not have qemu-guest-agent installed and running in the guest, the shutdown will fail (as it cannot be done via the guest agent).
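
A quick way to check both sides (VM id 101 is just an example here):

Code:
# on the PVE host: is the agent option set, and does the agent answer?
qm config 101 | grep ^agent
qm guest cmd 101 ping

# inside the guest: is the agent service installed and running?
systemctl status qemu-guest-agent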
 
I can't tell what has changed in qemu-server: 8.3.14 as it does not appear to be referenced in this changelog.

Here it appears due to a security advisory.
 
If you checked qemu-agent and do not have the qemu-guest-agent installed and running, the shutdown will fail.
Both posters report that this works on previous versions of qemu-server (< 8.3.14). That would not be explained by your suggestion.
 
Nothing in the diff between 8.3.13 and 8.3.14 looks suspicious. Could you maybe provide an example VM configuration and details on what OS is running inside?
 
Output of qm config 100:

agent: 1
bios: ovmf
boot: c
bootdisk: scsi0
cipassword: **********
ciuser: ansible
cores: 2
cpu: cputype=host
efidisk0: local:100/vm-100-disk-1.raw,efitype=4m,pre-enrolled-keys=1,size=528K
ipconfig0: ip=**********/24,gw=**********
memory: 768
meta: creation-qemu=9.2.0,ctime=1752145554
name: **********
net0: virtio=BC:24:11:38:39:C9,bridge=vmbr0
onboot: 1
ostype: l26
scsi0: local:100/vm-100-disk-0.raw,size=10G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=17709fac-82f8-422a-b4e1-**********
sockets: 1
sshkeys: ssh-ed25519%20AAAA**********
tpmstate0: local:100/vm-100-disk-2.raw,size=4M,version=v2.0
vga: qxl
vmgenid: 7b9a6e3f-13ad-4e3f-988f-**********

It's running AlmaLinux 9.6, installed using their cloud qcow2 image.
 
boot: c
bootdisk: scsi0
This doesn't make sense to me for a Proxmox VM with the available/listed disks.

cpu: cputype=host
This should be just cpu: host


ipconfig0:
I'm not sure this will be "passed along" correctly. See here but also here. I'm guessing this is just "orphaned" from the cloud-init image & in fact is not currently being used. (As is evident also by the lack of a configured CD Drive).
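
If you do want cloud-init to actually apply that ipconfig0, you would normally attach a cloud-init drive first and can then inspect what it would generate. Roughly (assuming the storage is called local):

Code:
qm set 100 --ide2 local:cloudinit
qm cloudinit dump 100 network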

memory: 768
Wow! I believe that is half the advised minimum. I would not settle for less than 1024.

Anyway, I'm not sure how much of this relates to your actual current issue, but it may be relevant.

Good luck.
 
Hi DaveFisher,

I agree with @gfngfn256 that the config does not look ideal; are you manually editing it?

I just did a test run with AlmaLinux on an older host (n1, configured via .ssh/config) still using qemu-server 8.3.12, which I then upgraded to qemu-server 8.3.14, but I can't make out any difference in behaviour.
Bash:
# initial situation
$> ssh n1 "dpkg -s qemu-server" | grep Version
Version: 8.3.12

This is the procedure I used for setting up the machine (I have DHCP on the bridge 'internal'):
Bash:
$> scp .ssh/id_ed25519.pub n1:
$> ssh n1 "qm create 8008 --cpu x86-64-v2-AES --cores 4 --memory 4096 --net0 bridge=internal,model=virtio,firewall=1 --scsihw virtio-scsi-single --agent 1 --ide2 rbd:cloudinit --ciuser alma --cipassword alma --sshkeys /root/id_ed25519.pub"
$> ssh n1 "qm disk import 8008 /mnt/pve/cephfs/import/AlmaLinux-9-GenericCloud-9.6-20250522.x86_64.qcow2 rbd"
$> ssh n1 "qm set 8008 --boot order=scsi0 --scsi0 rbd:vm-8008-disk-0"
This results in the following configuration:
Bash:
$> ssh n1 "qm config 8008"
agent: 1
boot: order=scsi0
cipassword: **********
ciuser: alma
cores: 4
cpu: x86-64-v2-AES
ide2: rbd:vm-8008-cloudinit,media=cdrom
memory: 4096
meta: creation-qemu=9.2.0,ctime=1752158166
net0: virtio=BC:24:11:80:9A:5A,bridge=internal,firewall=1
scsi0: rbd:vm-8008-disk-0,size=10G
scsihw: virtio-scsi-single
smbios1: uuid=acfee0b6-8cfa-4052-9cf8-ac9c5d2fba38
sshkeys: ssh-ed25519%20AAAAC3NzaC1lZDI1NTE5AAAAIOvNUo%2FARImoIwVVn3fGEj9YyX3OHxasVKxa6LISNQDn%20dherzig%40emma%0A
vmgenid: f266c6d8-90b1-4d40-880f-069e24a3510e

Start up the machine and retrieve its IP (please forgive the pvesh/jq chain, I just wanted to be able to show the sequence without 'hiding' any steps, and did not come up with anything better for now :P):
Bash:
$> ssh n1 "qm start 8008"
$> ssh n1 "pvesh get nodes/localhost/qemu/8008/agent/network-get-interfaces --output-format json" | jq '.result[] | select(.name=="eth0")' | sed 's|-|_|g' | jq '.ip_addresses[] | select(.ip_address_type=="ipv4")' | jq -r .ip_address
10.10.83.11
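
(An equivalent one-liner via qm guest cmd should also work if you prefer it over pvesh; untested on my side:)
Bash:
$> ssh n1 "qm guest cmd 8008 network-get-interfaces" | jq -r '.[] | select(.name=="eth0") | .["ip-addresses"][] | select(.["ip-address-type"]=="ipv4") | .["ip-address"]'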

Dial into the machine and get version:
Bash:
$> ssh -J n1 alma@10.10.83.11
[alma@VM8008 ~]$ rpm -q almalinux-release
almalinux-release-9.6-1.el9.x86_64
[alma@VM8008 ~]$ exit
logout
Connection to 10.10.83.11 closed.

Check state from the 'outside' and shut it down:
Bash:
$> ssh n1 "qm list" | grep 8008
      8008 VM 8008              running    4096              10.00 218486  
$> ssh n1 "qm shutdown 8008"
$> ssh n1 "qm list" | grep 8008
      8008 VM 8008              stopped    4096              10.00 0

After upgrading/rebooting the server the situation stayed the same:
Bash:
$> ssh n1 "dpkg -s qemu-server" | grep Version
Version: 8.3.14
$> ssh n1 "qm start 8008"
$> ssh n1 "qm list" | grep 8008
      8008 VM 8008              running    4096              10.00 5575  
$> ssh n1 "qm shutdown 8008"
$> ssh n1 "qm list" | grep 8008
      8008 VM 8008              stopped    4096              10.00 0

Also an Ubuntu 24.04 VM (manually installed through the GUI, here the network is manually set up in the guest OS) behaves the same way:
Bash:
$> ssh n1 "qm config 104"
agent: 1
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
ide2: cephfs:iso/ubuntu-24.04.2-live-server-amd64.iso,media=cdrom,size=3137758K
memory: 8192
meta: creation-qemu=9.2.0,ctime=1752074350
name: microk8s01
net0: virtio=BC:24:11:07:EE:8F,bridge=vmbr0,firewall=1
net1: virtio=BC:24:11:B3:6C:54,bridge=cephbridge,firewall=1
numa: 0
ostype: l26
parent: w_jupyter
scsi0: rbd:vm-104-disk-0,discard=on,iothread=1,size=60G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=086065ad-ccf9-492c-98a5-0c33749157c8
sockets: 1
vmgenid: 1c9cd163-c135-4777-a7f7-319daaac104c
$> ssh n1 "qm start 104"
$> ssh n1 "qm list" | grep 104
       104 microk8s01           running    8192              60.00 9141   
$> ssh n1 "qm shutdown 104"
$> ssh n1 "qm list" | grep 104
       104 microk8s01           stopped    8192              60.00 0

I'd suggest focussing on the VM config and testing whether you run into problems with manually installed VMs of various other OSs (that support qemu-guest-agent) as well. You might also like to use the API as a powerful instrument to modify/power your machines [0].
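
For example, a clean shutdown via the API (same VM id as above) looks roughly like this:

Bash:
$> ssh n1 "pvesh create /nodes/localhost/qemu/8008/status/shutdown"
$> ssh n1 "pvesh get /nodes/localhost/qemu/8008/status/current --output-format json" | jq -r .status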

[0] https://pve.proxmox.com/pve-docs/api-viewer/index.html
 
I've got a re-write of this process on my backlog, if the client allows time for it. I inherited this, so I'm not able to fully defend the choices made.

However, the basic process has been in place for two years now, working with Proxmox 7.x as well as 8.x, and has only failed with the introduction of this latest version of qemu-server ... in my automation, I've installed the 8.3.13 version and placed it on hold, and the automation runs through as expected.
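
For reference, the pin looks roughly like this on each hypervisor (remember to unhold once a fixed package is released):

Code:
sudo apt-get install qemu-server=8.3.13
sudo apt-mark hold qemu-server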

The OP mentioned that this was happening in their environment, where my particular configuration won't be an issue, and that it is impacting both Linux and Windows virtual machines. To me that suggests the issue is wider than my particular VM configuration.

I'll do what work I can to quickly improve the VM specification and respond back with any findings. Thanks.
 
Below is a Debian 11 VM config that I was using for testing (although the same issue was present on a Windows 11 VM). All VMs tested have qemu-guest-agent installed and previously worked fine with qemu-server < 8.3.14. Shutdown goes to a black screen and then hangs when initiated via the Proxmox web GUI "Shutdown" button (ACPI shutdown) or if I run `shutdown -h now` inside the VM. We are seeing this across many hosts - all Proxmox installed on Debian 12 within the last 48 hours.

If I let the VM sit at the black screen I get the error for the Shutdown task:

Code:
QEMU Guest Agent is not running - VM 101 qmp command 'guest-ping' failed - got timeout
TASK ERROR: VM quit/powerdown failed - got timeout

Interesting finding: if I run apt install qemu-server=8.3.13 and then apt install qemu-server=8.3.14 without powering on the VM in between, and then power on the VM, the shutdown works fine on 8.3.14. Perhaps it's not a bug in qemu-server itself but rather the package not being installed correctly, or something not starting correctly?
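
If it helps narrow this down, one way to test that theory might be to list the systemd units the package ships and check whether they are actually active after a fresh install (just a guess on my part):

Code:
dpkg -L qemu-server | grep '\.service$'
# then check each unit it ships, e.g.:
for u in $(dpkg -L qemu-server | grep -o '[^/]*\.service$' | sort -u); do echo "$u: $(systemctl is-active "$u")"; done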

Code:
agent: 1
boot: order=virtio0;net0
cores: 2
cpu: host
description:
kvm: 1
memory: 4096
meta: creation-qemu=9.2.0,ctime=1752104223
name: test
net0: virtio=BC:24:11:A8:0F:AA,bridge=vmbr1000
numa: 0
onboot: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=872c6166-a86f-4d1e-bb8e-2945c7df873e
sockets: 1
virtio0: local:100/base-100-disk-0.qcow2/101/vm-101-disk-0.qcow2,cache=none,discard=on,iothread=1,replicate=0,size=200G
vmgenid: 07afacae-7203-49b1-9a90-5b3945cce4ee
 
The only thing I see common to both posters that have the issue is that they have installed Proxmox on Debian. Maybe someone else with a similar environment could chime in here.
 
Hi badsectorlabs, DaveFisher,


I could reproduce the issue. It's not directly qemu-server v8.3.14's code (which IS an enhancement), but we missed starting the services `qmeventd.service` and `pve-query-machine-capabilities.service` at install time. So on a fresh PVE (with 8.3.14) installed on top of Debian Bookworm, you'll run into the situation that these two services are not running:


Bash:
root@pvetopbookworm02:~# systemctl status qmeventd.service pve-query-machine-capabilities.service
○ qmeventd.service - PVE Qemu Event Daemon
     Loaded: loaded (/lib/systemd/system/qmeventd.service; disabled; preset: enabled)
     Active: inactive (dead)

○ pve-query-machine-capabilities.service - PVE Query Machine Capabilities
     Loaded: loaded (/lib/systemd/system/pve-query-machine-capabilities.service; disabled; preset: enabled)
     Active: inactive (dead)


Until we have it patched, you can quickly enable these services by running:

Bash:
root@pvetopbookworm02:~# systemctl enable --now qmeventd.service pve-query-machine-capabilities.service

Then the status check should read:

Bash:
root@pvetopbookworm02:~# systemctl status qmeventd.service pve-query-machine-capabilities.service
● qmeventd.service - PVE Qemu Event Daemon
     Loaded: loaded (/lib/systemd/system/qmeventd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-07-11 18:08:32 CEST; 6s ago
    Process: 1645 ExecStart=/usr/sbin/qmeventd /var/run/qmeventd.sock (code=exited, status=0/SUCCESS)
   Main PID: 1647 (qmeventd)
      Tasks: 1 (limit: 9448)
     Memory: 296.0K
        CPU: 747us
     CGroup: /system.slice/qmeventd.service
             └─1647 /usr/sbin/qmeventd /var/run/qmeventd.sock

Jul 11 18:08:32 pvetopbookworm02 systemd[1]: Starting qmeventd.service - PVE Qemu Event Daemon...
Jul 11 18:08:32 pvetopbookworm02 systemd[1]: Started qmeventd.service - PVE Qemu Event Daemon.

● pve-query-machine-capabilities.service - PVE Query Machine Capabilities
     Loaded: loaded (/lib/systemd/system/pve-query-machine-capabilities.service; enabled; preset: enabled)
     Active: active (exited) since Fri 2025-07-11 18:08:32 CEST; 6s ago
    Process: 1646 ExecStart=/usr/libexec/qemu-server/query-machine-capabilities (code=exited, status=0/SUCCESS)
   Main PID: 1646 (code=exited, status=0/SUCCESS)
        CPU: 593us

Jul 11 18:08:32 pvetopbookworm02 systemd[1]: Starting pve-query-machine-capabilities.service - PVE Query Machine Capabilities...
Jul 11 18:08:32 pvetopbookworm02 systemd[1]: Finished pve-query-machine-capabilities.service - PVE Query Machine Capabilities.

At this point things should work as usual.

Temporarily downgrading to 8.3.13 does the same thing (it enables the services), which is why it works (also with 8.3.14 reinstalled afterwards). Until we've patched the package, you can use either technique as a stop-gap.

Let us know if this works for you!
 
Is there a permanent solution to the problem? After I start the services, they periodically stop again; backups are not made and VMs do not shut down normally.
(Proxmox 8.4.1 installed on Debian 12.)
 
I believe the just-released (yesterday or the day before?) qemu-server v8.4.1 has patched the above. I think I read this in the changelog.
 
Just found it in the changelog for qemu-server 8.4.1, located here.
qemu-server (8.4.1) bookworm; urgency=medium

  * revert shipping systemd unit in /usr for Bookworm to ensure all services
    are started on installation.

-- Proxmox Support Team <support@proxmox.com> Mon, 14 Jul 2025 13:51:53 +0200
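
So on affected hosts a normal upgrade should now be enough (plus unholding first, if you pinned the package earlier), roughly:

Code:
apt-mark unhold qemu-server   # only needed if you held it earlier
apt update && apt install qemu-server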
 