VM doesn't start Proxmox 7 - timeout waiting on systemd

Same story here. Are you booting your VM off a USB device? That's my scenario; the VM will no longer boot after the most recent update. (Running 5.13.19-4-pve where it now fails; came from 5.13.19-3 where it worked.)

Update: Reverted back to 5.13.19-3 for now, which fixed the issue. USB device passthrough is working again. Something appears to be very wrong with 5.13.19-4.
No, Proxmox is installed on 2 HDDs (mirrored) and the VMs are in a RAIDZ pool. I reverted back to 5.13.19-3 like you did and it's working for me too!
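
In case it helps anyone else: reverting is just a matter of booting the previously installed kernel again. A rough sketch (the exact steps depend on how your host boots; the kernel versions here are just the ones from this thread):

Code:
# see which pve-kernel versions are still installed
dpkg -l | grep pve-kernel

# on hosts booting via proxmox-boot-tool (e.g. ZFS on root / UEFI), list the kernels it manages
proxmox-boot-tool kernel list

# then reboot and pick the 5.13.19-3-pve entry under "Advanced options" in the boot menu
reboot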
 
Same issue after the upgrade to pve-kernel-5.13.19-4-pve.
+1

Adding a little bit of context.

I'm running PVE 7.1-10 with three VMs. The Home Assistant and Windows 11 VMs start just fine, but my Fedora 35 VM won't start. HA starts first, Win11 with a 300s startup delay; normally the Fedora VM starts in between with a 30s delay. After I upgraded to 5.13.19-4 and rebooted PVE, Fedora won't start. I have rebooted multiple times with the same result. Changing the autostart to a manual start does not help either.

On the first start of my Fedora 35 VM I get the following error:
TASK ERROR: start failed: command '/usr/bin/kvm -id 135 -name F35Prod -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/135.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/135.pid -daemonize -smbios 'type=1,uuid=--ABC123--' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/dev/zvol/zfs1/vm-135-disk-0' -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/135.vnc,password=on' -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 6144 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=--ABC123--' -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'usb-host,vendorid=0x0e8d,productid=0x1887,id=usb0' -device 'VGA,id=vga,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/135.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:a15be791c279' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/zfs1/vm-135-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap135i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=--ABC123--,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=101' -machine 'type=q35+pve0'' failed: got timeout

If I try to start the VM a second time I get the following error:
TASK ERROR: timeout waiting on systemd
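
In case it helps with debugging: PVE starts each VM in its own systemd scope, so the state of that scope (and whether anything from the failed first start is still lingering) can be checked with something like this, 135 being the VM ID from the command line above:

Code:
systemctl status 135.scope
qm status 135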
 
+1

Adding a little bit of context.

failed: got timeout

If I try to start the VM a second time I get the following error:
TASK ERROR: timeout waiting on systemd

I also received these errors which disappeared after rolling back to 5.13.19-3.

Question for you @depen, and I should probably split this off into a separate topic, were you actually able to reboot your Proxmox server after these errors?

I was not. Telling the server to shut down with
Code:
reboot 0
or using the shutdown option in the GUI resulted in a loss of SSH connection with a server that would not actually restart. I had to kill power to reboot.
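
For what it's worth: next time it hangs like that, it may be worth trying a magic SysRq emergency reboot from the console before pulling the plug. It skips a clean shutdown, so it is a last resort, but at least it syncs the disks first:

Code:
# enable the SysRq interface if it isn't already
echo 1 > /proc/sys/kernel/sysrq
# s = sync all filesystems, b = reboot immediately
echo s > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger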
 
The error "TASK ERROR: timeout waiting on systemd" occurs with all VMs that have been assigned a USB (passthrough) device as hardware. A remedy (as a workaround) can be found by

1. deleting the USB entry in the VM's hardware and then
2. testing the restart. If this works
3.a. you are (depending on the necessity of the USB hardware) further on and can use the VM (without the USB Hardware) again or
3b. if this still does not work, you have to

4. clone the VM (without USB hardware) and then
5. = 2. restart the clone. Attention, of course, the "original" must be switched off / shut down.


By the way, we took this as an opportunity to try out virtualhere.com ... perfect! This also solves the challenge with a cluster and migrating VMs across fixed machines!
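
For completeness: step 1, removing the USB entry, can also be done on the CLI instead of the GUI. A small sketch (VM ID 135 and the usb0 key are only examples; check qm config for the actual key on your VM):

Code:
# show the VM config and find the usbX entry
qm config 135 | grep ^usb
# remove it and try starting the VM again
qm set 135 --delete usb0
qm start 135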
 
I also received these errors which disappeared after rolling back to 5.13.19-3.

Question for you @depen, and I should probably split this off into a separate topic, were you actually able to reboot your Proxmox server after these errors?

I was not. Telling the server to shut down with
Code:
reboot 0
or using the shutdown option in the GUI resulted in a loss of SSH connection with a server that would not actually restart. I had to kill power to reboot.

I also had to power cycle my server :-(


The error "TASK ERROR: timeout waiting on systemd" occurs with all VMs that have been assigned a USB (passthrough) device as hardware. A remedy (as a workaround) can be found by

1. deleting the USB entry in the VM's hardware and then
2. testing the restart. If this works
3.a. you are (depending on the necessity of the USB hardware) further on and can use the VM (without the USB Hardware) again or
3b. if this still does not work, you have to

4. clone the VM (without USB hardware) and then
5. = 2. restart the clone. Attention, of course, the "original" must be switched off / shut down.


By the way, we took this as an opportunity to try out virtualhere.com ... perfect! This also solves the challenge with a cluster and migrating VMs across fixed machines!
Yes, that did the trick! Thanks! It's kind of weird that my USB Zigbee stick is forwarded correctly to my Home Assistant VM while my DVD reader can't be forwarded to my Fedora VM, but for now I can live without the DVD reader.
 
Same error here; reverted to the previous kernel version. I hope they will fix it in the next kernel.
 
Today I got a new kernel update:

Code:
pve-kernel (5.13.19-9) bullseye; urgency=medium

  * update to Ubuntu-5.13.0-30.33
    revert a problematic patch causing issues with releasing block devices

 -- Proxmox Support Team <support@proxmox.com>  Mon, 07 Feb 2022 11:01:14 +0100

I think it fixes the problem. I will test it...
 
Upgraded today, the kernel is now on 5.13.19-4. One VM does not start anymore, of course the only one with USB assigned.
How do I get 5.13.19-9, or a newer one, as this seems to fix the issue?
 
Click on the server, then on Updates, in the Proxmox admin panel.
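
If you prefer the CLI, the usual upgrade (assuming the right repositories are configured) should pull it in as well, followed by a reboot into the new kernel:

Code:
apt update
apt dist-upgrade
reboot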
Sorry for the confusion, it seems to be my mistake reading the numbers. In the first column it says 5.13.19-4, the second column shows 5.13.19-9. So that seems to be fine.
But unfortunately I still have the issue with USB.

Code:
root@pve:~# dpkg -l | grep pve-kernel
ii  pve-firmware                         3.3-5                          all          Binary firmware code for the pve-kernel
ii  pve-kernel-5.13                      7.1-7                          all          Latest Proxmox VE Kernel Image
ii  pve-kernel-5.13.19-1-pve             5.13.19-3                      amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.13.19-2-pve             5.13.19-4                      amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.13.19-3-pve             5.13.19-7                      amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.13.19-4-pve             5.13.19-9                      amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-helper                    7.1-10                         all          Function for various kernel maintenance tasks.
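
For anyone reading along: dpkg -l only shows what is installed, not what is currently booted, so it is also worth checking whether the new kernel is actually the one running (just a guess that this could be the mismatch here):

Code:
# should print 5.13.19-4-pve once the new kernel has been booted
uname -r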
 
Hi, I'm totally new to this "timeout waiting on systemd" thing. Today I made a snapshot and then a manual backup of a Windows VM (ID 229, with PCIe passthrough) to a PBS server, and the backup failed. After that the machine won't start again with this error:


Code:
UPID:h******3:00262C68:43728FE5:621E156D:qmstart:229:root@pam: 1 621E1572 timeout waiting on systemd


All other machines on this host are fine, and I want to avoid restarting the whole PVE host.

Is there a way to regain a clear state to make the VM bootable again?

After following this thread I read about "systemctl status qemu.slice" but could not find much general documentation; I tried to adapt it to my needs and issued:

Code:
root@***-**-**:~# systemctl status qemu.slice
● qemu.slice
     Loaded: loaded
     Active: active since Fri 2021-10-22 12:03:10 CEST; 4 months 8 days ago
      Tasks: 261
     Memory: 154.0G
        CPU: 1month 1w 5d 8h 31min 1.066s
     CGroup: /qemu.slice
             ├─102.scope
             │ └─1555295 /usr/bin/kvm -id 102 -name ***-**04 -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/102.qmp,server=on,wait=off -mon charde>
             ├─103.scope
             │ └─3934061 /usr/bin/kvm -id 103 -name ***-**-01 -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/103.qmp,server=on,wait=off -mon char>
             ├─104.scope
             │ └─2236807 /usr/bin/kvm -id 104 -name vpn20 -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/104.qmp,server=on,wait=off -mon chardev=q>
             ├─229.scope
             │ ├─1118318 swtpm socket --tpmstate backend-uri=file:///dev/rbd/ceph-ssd/vm-229-disk-1,mode=0600 --ctrl type=unixio,path=/var/run/qemu-server/229.sw>
             │ └─1118326 [kvm]
             ├─117.scope
             │ ├─4016959 swtpm socket --tpmstate backend-uri=file:///dev/rbd/ceph-ssd/vm-117-disk-1,mode=0600 --ctrl type=unixio,path=/var/run/qemu-server/117.sw>
             │ └─4016967 /usr/bin/kvm -id 117 -name ***-**-**-02 -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/117.qmp,server=on,wait=off -mon c>
             └─121.scope
               └─773695 /usr/bin/kvm -id 121 -name ***-SYS-**-02 -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/121.qmp,server=on,wait=off -mon ch>


Oct 31 08:59:23 ***-**-03 QEMU[1511538]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Oct 31 09:22:30 ***-**-03 QEMU[2759589]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Oct 31 17:20:04 ***-**-03 QEMU[1603138]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Oct 31 17:26:00 ***-**-03 QEMU[2637546]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Nov 01 08:57:45 ***-**-03 QEMU[2681515]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Nov 01 10:59:24 ***-**-03 QEMU[2650370]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Nov 02 12:41:32 ***-**-03 QEMU[462215]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Nov 05 14:49:42 ***-**-03 QEMU[517810]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Nov 14 16:43:51 ***-**-03 QEMU[4022511]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)
Nov 16 14:15:42 ***-**-03 QEMU[2488907]: kvm: terminating on signal 15 from pid 2184 (/usr/sbin/qmeventd)

and

Code:
root@***-***-*3:~# systemctl status 229.slice
● 229.slice
     Loaded: loaded
     Active: inactive (dead)
root@***-***-*3:~# systemctl status 229.scope
● 229.scope
     Loaded: loaded (/run/systemd/transient/229.scope; transient)
  Transient: yes
     Active: inactive (dead) since Tue 2022-03-01 13:06:21 CET; 47min ago
      Tasks: 59 (limit: 309274)
     Memory: 64.6G
        CPU: 1w 4d 9h 38min 20.291s
     CGroup: /qemu.slice/229.scope
             ├─1118318 swtpm socket --tpmstate backend-uri=file:///dev/rbd/ceph-ssd/vm-229-disk-1,mode=0600 --ctrl type=unixio,path=/var/run/qemu-server/229.swtp>
             └─1118326 [kvm]


Is it safe (for the other running production VMs) to stop the .slice and .scope "things" like this:


Code:
systemctl stop 229.scope
systemctl stop 229.slice


Or what could I try to get VM 229 to start again?
Thanks for any help with that.
 
