Again, replacing XYZ with the VM ID of a stuck VM, can you check what the following show?
Code:
systemctl status XYZ.scope
cat /proc/$(cat /var/run/qemu-server/XYZ.pid)/status | grep PPid
and then, using the result of the last command:
Code:
cat /proc/PPID/cmdline
Are there any systemd settings you modified?
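The three manual commands above can be rolled into one small script. This is only a sketch, not official Proxmox tooling; it assumes the standard Proxmox pidfile location `/var/run/qemu-server/<vmid>.pid` and would need to run as root on the host:

```python
#!/usr/bin/env python3
"""Sketch: look up the parent of a VM's QEMU process and print its cmdline."""
from pathlib import Path


def parse_ppid(status_text: str) -> int:
    """Extract the PPid field from the content of /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith("PPid:"):
            return int(line.split(":", 1)[1].strip())
    raise ValueError("no PPid field found")


def format_cmdline(raw: bytes) -> str:
    """/proc/<pid>/cmdline separates arguments with NUL bytes."""
    return " ".join(p.decode() for p in raw.split(b"\0") if p)


def check_vm(vmid: int) -> tuple[int, str]:
    """Return (parent pid, parent cmdline) for the QEMU process of vmid."""
    pid = int(Path(f"/var/run/qemu-server/{vmid}.pid").read_text().strip())
    ppid = parse_ppid(Path(f"/proc/{pid}/status").read_text())
    parent_cmdline = format_cmdline(Path(f"/proc/{ppid}/cmdline").read_bytes())
    return ppid, parent_cmdline
```

A parent pid of 1 with cmdline `/sbin/init` would mean the QEMU process was reparented to init, i.e. it is no longer under the systemd scope that started it.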
@fiona I see this:
root@pve:~# systemctl status 134.scope
Unit 134.scope could not be found.
root@pve:~# cat /proc/$(cat /var/run/qemu-server/134.pid)/status | grep PPid
PPid: 1
root@pve:~# cat /proc/1/cmdline
/sbin/init
root@pve:~#
No, I haven't modified any systemd settings.
I do have a patch in the works that would go back to getting the VM ID from the process's commandline rather than the cgroup file, which would be a workaround.
But the real question is why the cgroup file looks like it does in your case. We always run the QEMU command in the qemu.slice/ID.scope systemd scope: https://git.proxmox.com/?p=qemu-ser...88629e1fa1b96f500ab902ccbaffb77;hb=HEAD#l5861
So there has to be a bug either in our code for setting this up or in systemd itself. But I wasn't able to reproduce the issue yet and am still investigating.
You can't downgrade a Proxmox VE installation (or Debian) across major versions. As a workaround, you can use snapshot mode backup. You should install and enable the guest agent for VMs that don't have it yet, so filesystem consistency is not an issue.

@fiona But I can back up my VMs to a backup server, then reinstall Proxmox VE version 7.4 and restore my VMs.

Yes, if you think that is worth the effort. Is there a special reason you need to use stop mode backup?

@fiona Can you explain to me how to use this patch? Maybe I should use special commands, or edit some configs?

Patch to improve qmeventd: https://lists.proxmox.com/pipermail/pve-devel/2024-June/064210.html
It's intended to be reviewed by other developers and if they deem it acceptable, it will be applied and rolled out in a future version. You could apply and build it yourself, but do so at your own risk.
snapshot mode backup is also a full backup, and as long as the guest agent is installed and enabled, the filesystem state will be consistent too. Of course, there can be special applications that require even more than that, e.g. databases, which can be handled with a hook script.

Is there a solution to this problem? This problem is observed on different hosts.
Code:
error parsing vmid for 1289723: no matching qemu.slice cgroup entry
Apr 06 21:25:42 prox03 qmeventd[2139]: could not get vmid from pid 1289723

Do you see the "no matching qemu.slice cgroup entry" error?

Yep:
Code:
Apr 07 00:36:34 cl1 qmeventd[847]: error parsing vmid for 602656: no matching qemu.slice cgroup entry
Apr 07 00:36:34 cl1 qmeventd[847]: could not get vmid from pid 602656
Hi,
did you spawn the VMs manually or via the Proxmox VE UI/API/CLI? In the latter case, please share excerpts from the system logs/journal from around the time the VMs were started (you can search for qmstart:602656 or qmstart:1289723) and from around the time the issue started occurring.
EDIT: Do you have anything on the system that could modify/affect the systemd slices/scopes? Note that the qmstart: task name is followed by the VM ID, but the numbers from the earlier messages are not VM IDs, so please clarify this.

The VMs were created manually through the web GUI. No changes were made to Proxmox. The STOP method is used for backups. The console shows that the system hangs at the shutdown stage, but if you use RESUME it shuts down correctly and starts booting. In the GUI the icon of this VM is highlighted. In the task log:
Code:
TASK ERROR: VM quit/powerdown failed
"Does it shut down the virtual machine correctly?"
Yes.

"Is it a Windows VM?"
Windows (with agent) and Ubuntu.

"Was the backup OK?"
No. After trying to back up, the VM hangs (not always) and I have to manually press RESUME.
Code:
INFO: Starting Backup of VM 106 (qemu)
INFO: Backup started at 2025-04-07 02:19:00
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: alkiv-ubuntu
INFO: include disk 'scsi0' 'local:106/vm-106-disk-0.qcow2' 10G
INFO: stopping virtual guest
INFO: VM quit/powerdown failed
ERROR: Backup of VM 106 failed - command 'qm shutdown 106 --skiplock --keepActive --timeout 600' failed: exit code 255
INFO: Failed at 2025-04-07 02:29:00
INFO: Backup job finished with errors
TASK ERROR: job errors
"start and stop"
start: 15 sec (to desktop)