VM doesn't start Proxmox 6 - timeout waiting on systemd

Thank you for your prompt reply.

Currently 5.0.15-1.

Should we just update the kernel via the pve-no-subscription repo? Or would creating the vhost-net.conf file and running "update-initramfs -u" followed by "update-grub" be faster/safer?

We would like to avoid any downtime/restarts if at all possible.
 
Should we just update the kernel via the pve-no-subscription repo

Why not upgrade to pve-kernel-5.0.21-3-pve in version 5.0.21-7, which is available in pve-enterprise?

Or would creating the vhost-net.conf file and running "update-initramfs -u" followed by "update-grub" be faster/safer?

Both need exactly the same (down)time, i.e., a reboot. We currently do not know of issues with the pve-enterprise kernel, at least none that were not also there in the 5.0.15 you're using :)

If you have a cluster you could maybe migrate those VMs away to avoid their downtime, if your setup does not restrict this.
 
Why not upgrade to pve-kernel-5.0.21-3-pve in version 5.0.21-7, which is available in pve-enterprise?

Not sure what you mean by pve-kernel-5.0.21-3-pve "in" version 5.0.21-7. We're currently running proxmox-ve: 6.0-2

And for pve-enterprise, we would need to purchase a subscription, correct?

Ah, so a restart is required either way. Understood.

Yes, we can live migrate to our other KVM host, that's a great idea. I'm just a little scared we could see the same issue during the migration, but hopefully not.

Lastly, I assume based on your recommendation that it's safe to update the kernel one at a time without breaking our cluster? We just have a two node cluster with ZFS, so we can live migrate and replicate.

Thank you so much for your help. We will seriously consider support if the cost is reasonable.
 
I read the Package Repository section... I'm convinced we'll at least get the Community subscription for our two hosts (four sockets).
 
Lastly, I assume based on your recommendation that it's safe to update the kernel one at a time without breaking our cluster? We just have a two node cluster with ZFS, so we can live migrate and replicate.

It won't break, but you will see a short time where the other node is not quorate (i.e., you won't be able to change or create VMs), but that can be mitigated if problems arise.
Further, you can always boot back into the old kernel if there are issues with the new one in your specific setup, to quickly restore service.

If you only have two nodes, I'd recommend adding a third one to increase redundancy and to make testing updates, or doing (hardware or software) maintenance work in general, much easier and stress-free. If budget, datacenter space, or the like does not allow this, then I'd suggest adding an external QDevice, see https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support , to get at least some of those advantages.
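For reference, on Proxmox VE such a QDevice can be registered with the pvecm tool. A minimal sketch, assuming the external host's IP (10.0.0.50 here is a placeholder) and that SSH root access to it works:

```shell
# On the external machine that will act as the tie-breaking vote
# (a small VM or container is enough), install the qnetd daemon:
apt install corosync-qnetd

# On one of the cluster nodes, install the client side and
# register the QDevice (replace 10.0.0.50 with your host):
apt install corosync-qdevice
pvecm qdevice setup 10.0.0.50

# Verify that the cluster now expects three votes:
pvecm status
```

With the QDevice in place, one node can be rebooted for a kernel update while the remaining node stays quorate.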
 
I'm getting "TASK ERROR: timeout waiting on systemd" when trying to live migrate one of the VMs (a few others migrated with no problem).

Do you think this is related, and thus I'll just have to go offline tonight for the changes?
 
Why not upgrade to pve-kernel-5.0.21-3-pve in version 5.0.21-7, which is available in pve-enterprise?

So I dist-upgraded to "pve-kernel-5.0.21-4-pve: 5.0.21-9" on no-subscription.

How would one "update" from this fully updated no-subscription state to enterprise? Just change the repo and run apt update and apt dist-upgrade? Or will I have to downgrade first?
 
Do you think this is related, and thus I'll just have to go offline tonight for the changes?

Yes, it is highly probable, and so a reboot is unavoidable. However, you could suspend the VMs so that they themselves only sleep for a few minutes and can be resumed in their old running state again afterwards.
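As a sketch, suspending to disk (hibernation) can be done per VM with qm in recent qemu-server versions; the VMID 100 is just an example:

```shell
# Hibernate the VM: its RAM state is written to storage,
# so the guest survives the host reboot:
qm suspend 100 --todisk 1

# ... reboot the host into the new kernel ...

# Starting the VM afterwards resumes it from the saved state:
qm start 100
```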

How would one "update" from this fully updated no-subscription state to enterprise? Just change the repo and run apt update and apt dist-upgrade? Or will I have to downgrade first?

You would remove any no-subscription or pvetest repository entry and ensure that the one for the enterprise repository is configured (normally Proxmox VE already ships one in /etc/apt/sources.list.d/pve-enterprise.list).
Then simply ensure that the subscription is activated on the server (enter it via the web interface) and you should be all set.
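In practice the switch looks roughly like this. The paths are the standard ones on Proxmox VE 6 / Debian Buster; the no-subscription entry may also live directly in /etc/apt/sources.list on your system, so adjust accordingly:

```shell
# Disable the no-subscription repository by commenting it out
# (assuming it lives in its own file; check /etc/apt/sources.list too):
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-no-subscription.list

# Make sure the enterprise repository entry is active; Proxmox VE
# normally ships it already and it should contain:
cat /etc/apt/sources.list.d/pve-enterprise.list
# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

# With the subscription key entered in the web interface:
apt update
apt dist-upgrade
```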
 
That said, aren't the no-subscription versions I'm running now higher than the subscription ones?

That's correct, at least most of the time.
So will apt dist-upgrade safely downgrade these to the "stable" subscription versions?
No, it won't. Normally it's enough to simply wait until the enterprise repo has caught up again; as frequent switching is not a common use case, this works well enough.

If there's really an issue with a specific package (set), you can always manually downgrade that one. For that, one normally checks apt changelog <package> for the versions and finds the last known good one (most of the time the second in the list, as the first is the currently installed one); then one can downgrade with apt install <package>=<version>, where <package> is obviously replaced with the package name and <version> is the one copied from the changelog. While that sounds a bit complicated, it won't be required often, and one gets used to apt pretty quickly, IMO :)
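The downgrade procedure above, sketched with a hypothetical package and version (the version number is a placeholder, not one taken from the real repositories; copy the actual one from the changelog output):

```shell
# List the changelog entries; the newest (currently installed)
# version comes first, the previous one is usually the last known good:
apt changelog pve-qemu-kvm

# Pin-install the older version copied from the changelog:
apt install pve-qemu-kvm=4.0.1-3

# Optionally hold the package so a later dist-upgrade does not
# immediately pull the problematic version back in:
apt-mark hold pve-qemu-kvm
```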
 
Hi,
on a system with the second-to-last kernel, or the one before that, we have the same error: "VM doesn't start Proxmox 6 - timeout waiting on systemd". We use the no-subscription repository... (at present I cannot check that).
Is this problem solved in the current kernel/version?
 
second-to-last kernel, or the one before that
Please always post versions, as we cannot know which kernel your server was last upgraded to :)

But yes, with the 5.3 kernel we had people report that it solved their issue, and we generally recommend upgrading to the 5.3-based kernel series - the 5.0 kernel will get at most one more update, if at all.
 
Hi Thomas,
yes, I know (but at present I have no remote connection).
It is this version:
Code:
root@prox:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-4-pve)
pve-manager: 6.0-11 (running version: 6.0-11/2140ef37)
pve-kernel-helper: 6.0-11
pve-kernel-5.0: 6.0-10
pve-kernel-5.0.21-4-pve: 5.0.21-8
pve-kernel-5.0.21-3-pve: 5.0.21-7
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-3
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-6
libpve-guest-common-perl: 3.0-2
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-10
pve-docs: 6.0-8
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-4
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-4
pve-xtermjs: 3.13.2-1
pve-zsync: 2.0-1
qemu-server: 6.0-13
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Is there something more to do, besides the `apt update && apt full-upgrade && reboot`
described in previous posts? Something like
Code:
## creating
/etc/modprobe.d/vhost-net.conf
## with the one-liner
## and then running
update-initramfs -u
regards,
maxprox
 
described in previous posts? Something like
That was a workaround for the 5.0-based kernel; it should not be required when using 5.3.

So no, as long as the upgrade goes OK you just need that plus a reboot.
 
