Proxmox VE 8.0 released!

I'll try that when my ACPI shutdown stops working.

By the way... shouldn't that job finish with a warning instead of an OK, if a hard stop was required because the shutdown failed?
 
Tbh, here if an ACPI/QEMU shutdown fails, it simply times out and doesn't stop the VM.
I just get an error, something like "failed to shutdown, timed out" (after around 2 minutes).
Then I can either console into the VM and run a "shutdown now", or hard-stop it myself.

I think the only moment it actually does a hard stop is when I shut down or reboot the whole node while the VM is still running on it,
and the VM isn't configured for ACPI or has no QEMU agent.
It then waits 2 minutes for the shutdown, and if that times out, I think it will hard-stop the VM.

But by default, at least, it just times out and does nothing here.
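
For reference, a minimal sketch of that manual fallback from the CLI, assuming VM ID 100 (--timeout and --forceStop are options of qm shutdown; adjust to taste):
Code:
# ask the guest for a clean ACPI/agent shutdown, waiting at most 120 seconds
qm shutdown 100 --timeout 120

# if that times out, hard-stop the VM manually
qm stop 100

# or do both in one go: clean shutdown first, hard stop when the timeout hits
qm shutdown 100 --timeout 120 --forceStop 1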
 
With PVE 7.4 it shut down that TrueNAS VM, so it stopped before the timeout triggered.

But now with PVE 8 it looks like the shutdown task just kills the VM without actually waiting for the guest OS to be properly shut down. :/

As a workaround I could use the TrueNAS API or TrueNAS webUI to shut down the VM from within the guest OS, but that would still be problematic in case NUT triggers a shutdown of all the servers, because PVE will then try to shut down the VM on its own.
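
A rough sketch of that guest-side workaround, assuming the TrueNAS middleware client (midclt) and the v2.0 REST API expose a system.shutdown call; the host name and API key below are placeholders:
Code:
# from a shell inside the TrueNAS guest, via the middleware client
midclt call system.shutdown

# or remotely against the (hypothetical) TrueNAS host via the REST API
curl -k -X POST -H "Authorization: Bearer <API_KEY>" \
     https://truenas.example.com/api/v2.0/system/shutdown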

In the past I already had a problem shutting down a FreeBSD VM, but I'm not sure if it was my OPNsense VMs or the TrueNAS VMs. After triggering the shutdown task I could see in the guest's console that the guest OS was shutting down and stopping services, but then it got stuck and the shutdown task failed with the VM still running but not responding (because the guest OS had already shut down nearly everything), so I had to run a stop task to stop it (which was fine, as the filesystems were unmounted by then).

But in the last months there were no shutdown problems at all.
 
I'm a bit confused about what ACPI or the QEMU guest agent should actually do by default.
I just read some older posts and even the Proxmox ACPI docs.

In the docs they write that an ACPI shutdown tries some API calls and then stops the VM if nothing happens.
Same thing in the "Shutdown timeout" description; they mention 180s by default there.
--> I don't know where to change the default value of 180 seconds (probably with some CLI command).

But I'm confused why your "Shutdown timeout" is set to 15s.

However, as far as I understand, the default behavior is then:
- When you shut down a VM through the PVE GUI or via "qm shutdown xxx"
--> it times out after 180s and nothing will happen, the VM will still run
- When you shut down the PVE node and the shutdown times out after 180s, the VM will be force-stopped.

---

I think PVE only detects a VM as properly shut down if the VM turned itself off (poweroff).
- I don't know if a "halt" is enough; in the past I had some computers that shut down but didn't power off.
Maybe something like that is happening with TrueNAS?


"As a workaround I could use the TrueNAS API or TrueNAS webUI to shutdown the VM from within the guest"
--> This is not a workaround, at least it wouldn't be for me :)

Isn't a better workaround to use the qemu guest agent, that i linked in the post before?
It worked really great here without any issues + provides ip adresses in the pve GUI:)

But I would first increase that "Shutdown timeout" from 15s to at least 30s...
Maybe on 7.4 the 15s were enough, but with 8.0 the shutdown + shutdown detection now simply takes 16 seconds?

Cheers
 
Same thing in the "Shutdown timeout" description; they mention 180s by default there.
--> I don't know where to change the default value of 180 seconds (probably with some CLI command).
Each VM has a "Shutdown timeout" option in the webUI at VM -> Options -> Start/Shutdown Order. But here that is everywhere set to default, so it should use the default 180 seconds and not kill it after 15 seconds: https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_startup_and_shutdown
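
For the CLI side of this, a minimal sketch, assuming the webUI "Shutdown timeout" maps to the down parameter of the startup option (VM ID 100 as an example):
Code:
# order 1, 30s startup delay, 60s shutdown timeout for VM 100
qm set 100 --startup order=1,up=30,down=60

# verify the resulting config entry
qm config 100 | grep startup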
 
Exactly, that's what I meant.

I'm going to install TrueNAS Core again on a node that I can restart and that no one needs xD
Then disable ACPI and not install the qemu-guest-agent.
Just to check what happens here if I restart the node,
and whether it waits at least 180s, or only 15s, before terminating :-)
 
No @Dunuin, sorry.

But this time I tried without the QEMU guest agent, just the normal ACPI shutdown; that works without issues...

However, I simply enabled "QEMU Guest Agent: Enabled" in the VM options in PVE,
without having qemu-guest-agent installed in TrueNAS Core...
That was actually enough to make the shutdown not work, so that PVE has to force-stop the VM.
Without a "Shutdown timeout" defined in the VM options, it times out after around 180s...
Not 15s here, sorry...

Sorry for not being of more help, but it looks like the issue is somewhere on your side this time.

But this time at least, I got your message xD
(screenshot: Bildschirmfoto 2023-07-20 um 19.54.48.png)

Just that the message is expected and I triggered it manually, by breaking the shutdown and rebooting the node... :-(

EDIT: You have to find out where your 15s comes from.
Probably check if it happens with any other VMs?
Simply disable the QEMU guest agent inside the VM and enable the QEMU guest agent option in PVE in the VM options; that should be enough to check whether it force-stops after 15s on other VMs too.
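
A quick sketch of that check from the CLI, assuming VM ID 100; qm set --agent toggles the option and qm guest cmd ... ping tests whether an agent actually answers:
Code:
# enable the "QEMU Guest Agent" option in PVE while no agent is installed in the guest
qm set 100 --agent enabled=1

# this should fail or time out, since no agent is running inside the guest
qm guest cmd 100 ping

# revert the option afterwards
qm set 100 --agent enabled=0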
 
I am hoping this is being posted in the right place; I'm not sure if I should have started a new thread or not. I have a question about resource mapping and how it relates to USB mapping. I currently have a VM with a number of USB devices attached via the new mapping feature, as it lets me assign a more memorable name in the VM's hardware list. The issue is that if I disconnect the device, I cannot start the VM, whereas if I add the device to the VM without mapping, the VM will start with the device missing. Is this something that will be updated later, or is this the expected behaviour moving forward? It is not a huge deal, as I just have to make sure my thumb drive is always connected when I start the VM; after that I can connect and disconnect without issue. It is just more of a pain than anything. It also prevents me from mapping my mouse and keyboard, as they are on a KVM switch that uses USB pass-through to attach them to two separate VMs with a GPU: if I use the mapping, the VMs will not start unless the monitor is on that input, so that the mouse and keyboard actually exist.
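
For comparison, the two ways of attaching such a device from the CLI (a rough sketch; "mythumbdrive" is a hypothetical mapping name and the vendor:product ID is a placeholder):
Code:
# PVE 8 resource mapping: memorable name, but the VM refuses to start if the device is absent
qm set 100 -usb0 mapping=mythumbdrive

# classic pass-through by vendor:product ID: the VM starts even if the device is missing
qm set 100 -usb1 host=0781:5583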
 
Updated two clusters from 7.x to 8.x this weekend. The only hiccup was the "production" repository license on the test cluster when running the apt update command. I run mostly LXC containers, and playing the migration game between hosts was the most fun I've had in a while. This in-place upgrade was the best I have done with Linux in a while, and the Proxmox team sure did an excellent job making it as flawless as possible.
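
For anyone hitting the same repository hiccup on a cluster without a subscription, a minimal sketch of switching to the no-subscription repository on PVE 8 / Bookworm:
Code:
# disable the enterprise repository (it requires a valid subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# add the no-subscription repository instead
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt update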
 
Hello.
I have updated my test cluster (5 nodes). All works fine, but on a few nodes I see some errors in dmesg:

Code:
[79000.018843] pverados[1002335]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 in perl[55a429f80000+195000] likely on CPU 0 (core 0, socket 0)
[79000.018858] Code: Unable to access opcode bytes at 0x55a429fac006.
[112691.083445] pverados[1426585]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 in perl[55a429f80000+195000] likely on CPU 3 (core 3, socket 0)
[112691.083459] Code: Unable to access opcode bytes at 0x55a429fac006.
[115751.124815] pverados[1464845]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 in perl[55a429f80000+195000] likely on CPU 22 (core 6, socket 0)
[115751.124830] Code: Unable to access opcode bytes at 0x55a429fac006.
[116981.038112] pverados[1480841]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 in perl[55a429f80000+195000] likely on CPU 3 (core 3, socket 0)
[116981.038126] Code: Unable to access opcode bytes at 0x55a429fac006.
[118159.853135] pverados[1495409]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 likely on CPU 27 (core 11, socket 0)
[118159.853145] Code: Unable to access opcode bytes at 0x55a429fac006.
[126120.960397] pverados[1596000]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 in perl[55a429f80000+195000] likely on CPU 24 (core 8, socket 0)
[126120.960412] Code: Unable to access opcode bytes at 0x55a429fac006.
[131860.866128] pverados[1668760]: segfault at 55a429fac030 ip 000055a429fac030 sp 00007ffd0bdeb2c8 error 14 in perl[55a429f80000+195000] likely on CPU 29 (core 13, socket 0)
[131860.866145] Code: Unable to access opcode bytes at 0x55a429fac006.

Code:
[146685.756874] pverados[1858147]: segfault at 55fa0ab86e90 ip 000055fa07a3609d sp 00007fff61dbff30 error 7 in perl[55fa0795b000+195000] likely on CPU 14 (core 14, socket 0)
[146685.756886] Code: 0f 95 c2 c1 e2 05 08 55 00 41 83 47 08 01 48 8b 53 08 22 42 23 0f b6 c0 66 89 45 02 49 8b 07 8b 78 60 48 8b 70 48 44 8d 6f 01 <44> 89 68 60 41 83 fd 01 0f 8f 4d 04 00 00 48 8b 56 08 49 63 c5 48
[147674.484918] pverados[1870446]: segfault at 55fa08306910 ip 000055fa07a43c36 sp 00007fff61dc01a0 error 7 in perl[55fa0795b000+195000] likely on CPU 3 (core 3, socket 0)
[147674.484931] Code: 01 08 49 89 c6 e8 8a b1 02 00 48 8b 85 e0 00 00 00 4c 8b 44 24 08 48 8b 40 18 48 85 c0 0f 84 e3 02 00 00 48 8b 15 a2 52 21 00 <c7> 40 20 ff ff ff ff 66 48 0f 6e c8 48 89 50 28 48 8b 10 48 8b 12
[149555.347900] pverados[1894875]: segfault at 55fa0ab86e90 ip 000055fa07a3609d sp 00007fff61dbff30 error 7 in perl[55fa0795b000+195000] likely on CPU 16 (core 0, socket 0)
[149555.347913] Code: 0f 95 c2 c1 e2 05 08 55 00 41 83 47 08 01 48 8b 53 08 22 42 23 0f b6 c0 66 89 45 02 49 8b 07 8b 78 60 48 8b 70 48 44 8d 6f 01 <44> 89 68 60 41 83 fd 01 0f 8f 4d 04 00 00 48 8b 56 08 49 63 c5 48
[153775.382424] pverados[1947540]: segfault at 55fa08306910 ip 000055fa07a43c36 sp 00007fff61dc01a0 error 7 in perl[55fa0795b000+195000] likely on CPU 15 (core 15, socket 0)
[153775.382438] Code: 01 08 49 89 c6 e8 8a b1 02 00 48 8b 85 e0 00 00 00 4c 8b 44 24 08 48 8b 40 18 48 85 c0 0f 84 e3 02 00 00 48 8b 15 a2 52 21 00 <c7> 40 20 ff ff ff ff 66 48 0f 6e c8 48 89 50 28 48 8b 10 48 8b 12

Does anyone else have this? Any idea how to solve it?
Thanks.
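
Not a fix, but a sketch of what is usually worth collecting before reporting this (pverados appears to be the Ceph/RADOS helper forked by the PVE status daemon, so the exact package versions matter):
Code:
# exact package versions (pve-manager, kernel, ceph, ...) for a bug report
pveversion -v

# when and how often the segfaults occur, plus the running kernel
dmesg -T | grep pverados
uname -a

# make sure the Ceph cluster itself is healthy
ceph -s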
 
Hi!

Is it possible to make the "-dbg" versions of the kernels available?

Code:
$> aptitude search linux-image | grep -i pve

v   linux-image-6.2.16-1-pve-amd64                                  -
v   linux-image-6.2.16-2-pve-amd64                                  -
v   linux-image-6.2.16-3-pve-amd64                                  -
v   linux-image-6.2.16-4-pve-amd64                                  -
v   linux-image-6.2.16-5-pve-amd64                                  -

Example ( Base Debian ):
Code:
$> aptitude search linux-image-6.1.0-9

p   linux-image-6.1.0-9-amd64                                       - Linux 6.1 for 64-bit PCs (signed)                                        
p   linux-image-6.1.0-9-amd64-dbg                                   - Debug symbols for linux-image-6.1.0-9-amd64
 

They are huge, which is why we don't ship them in our repository. You can build one yourself, though, if you need it to debug a particular issue:

https://git.proxmox.com/?p=pve-kernel.git;a=blob;f=README;hb=HEAD#l113
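
A rough sketch of that route; the exact build targets and dependencies are in the linked README, so treat the make invocation below as an assumption:
Code:
# clone the kernel packaging repository
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel

# install the build dependencies and build the .deb packages as described in the README
make deb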
 
Hi, I just installed Proxmox 8.0.2 from the ISO using legacy BIOS, with the root filesystem on a ZFS mirror. After installing, I changed the BIOS to UEFI, but Proxmox fails to boot; it shows the message: Not bootable device ...
 
Yeah, if you install in BIOS mode then UEFI boot won't be set up, and switching later cannot work without further interaction. E.g., if the UEFI interface isn't available, we cannot register a boot entry in the EFI variables.

Why don't you also install while booted in UEFI mode?
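
If reinstalling in UEFI mode is not an option, a sketch of what one could check and attempt with proxmox-boot-tool on a ZFS-mirror install; the partition names are placeholders, and this assumes the installer created an ESP on each mirror disk:
Code:
# check whether the node (or rescue system) is currently booted in UEFI mode
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"

# show which ESPs proxmox-boot-tool currently manages
proxmox-boot-tool status

# while booted in UEFI mode (e.g. from a rescue system), format and register the ESPs
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2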
 
