Proxmox VE 8.0 released!

Hi Fiona,

No, I am not running the very latest BIOS for my motherboard. My current BIOS is from August 2022 and has the 'Update to AGESA ComboAm4v2PI 1.2.0.7.' patches. I thought this patch was meant to fix these problems. Your link does not mention which AGESA version is needed to fix this, but I have found: https://www.phoronix.com/news/AMD-Linux-Stuttering-Fix-fTPM

I am reluctant to upgrade the BIOS, since it means reconfiguring everything in the BIOS from scratch, if it is not absolutely necessary. There are so many settings, and I am also worried that an update might change my IOMMU groupings, which are very important to me.

If it helps, I am running an MSI PRESTIGE-X570-CREATION motherboard with the 7C36v1I BIOS.

Do you think that newer BIOS updates are going to fix this problem? Is there a way to find out what check the 6.1 kernel does to determine if the hardware is faulty? Could this check be disabled?

Thanks for your help,

Jonathan
 
Maybe this helps:
What I generally do in the BIOS on my X570 board is:

Code:
Advanced/Chipset Configuration/Above 4G Decoding -> Enabled
Advanced/Chipset Configuration/SR-IOV Support -> Enabled
Advanced/Chipset Configuration/Re-Size BAR Support -> Auto
Advanced/Storage Configuration/SATA Mode -> AHCI
Advanced/Storage Configuration/SATA Hot Plug -> Enabled
Advanced/AMD PBS/Primary Graphics Adapter -> Onboard D-sub (depends on whether you're using a dGPU and want to pass it through)
Advanced/AMD PBS/PCIE Link Width -> depends on whether you use a bifurcation card... (this is just a hint)
Advanced/AMD CBS/CPU Common Options/Local APIC Mode -> "Auto" or "x2APIC", depending on what you need
Advanced/AMD CBS/NBIO Common Options/IOMMU -> Enabled
Advanced/AMD CBS/NBIO Common Options/DMAr Support -> Auto (if you set it to Enabled, it can happen that USB devices like the keyboard won't work during boot)
Advanced/AMD CBS/NBIO Common Options/ACS Enable -> Enable
Advanced/AMD CBS/NBIO Common Options/Enable AER Cap -> Enable

Security/Secure Boot/Secure Boot -> Disabled
Boot/CSM/CSM -> Disabled (depends on how you installed Proxmox; I always install in UEFI mode if possible, so I always have it disabled)

As a side note:

Before you shut down to update the BIOS, I would recommend using the maintenance-mode script from Darkhand81 on GitHub.

All the script does is disable "onboot" for all VMs when you run it, and enable "onboot" again for all of them when you run it a second time.
That's very handy, because otherwise you would have to turn off "onboot" for every VM/CT by hand and enable it again by hand afterwards.
The script simply "remembers" which containers/VMs had onboot enabled and which didn't.
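For anyone curious, the core idea can be sketched in a few lines of shell (a minimal illustration, not Darkhand81's actual script; the function names and state-file handling here are made up, and on a real node the guest configs live under /etc/pve/qemu-server and /etc/pve/lxc):

```shell
# Flip "onboot: 1" to "onboot: 0" in every guest config under the given
# directories, remembering which guests had it enabled so it can be restored.
enter_maintenance() {   # usage: enter_maintenance STATE_FILE CONF_DIR...
    state="$1"; shift
    : > "$state"
    for d in "$@"; do
        for conf in "$d"/*.conf; do
            [ -f "$conf" ] || continue
            if grep -q '^onboot: 1' "$conf"; then
                echo "$conf" >> "$state"
                sed -i 's/^onboot: 1/onboot: 0/' "$conf"
            fi
        done
    done
}

# Re-enable onboot only for the guests recorded in the state file.
leave_maintenance() {   # usage: leave_maintenance STATE_FILE
    while IFS= read -r conf; do
        sed -i 's/^onboot: 0/onboot: 1/' "$conf"
    done < "$1"
}
```

The actual script may work differently under the hood; this just shows the remember-and-restore idea.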

I'm mentioning it because, in case the IOMMU mappings changed, you can then manually check them in each VM.
Otherwise you boot back into Proxmox and all VMs will probably start with the wrong passthrough device
(in case the mappings changed).

Cheers

EDIT: Before all that, I would maybe check whether fTPM is even enabled in the BIOS xD
 
GKH is not all-knowing; PVE uses Ubuntu-backported kernels, so they will be supported at least through this year, maybe even longer.
 
If I could vote, I wouldn't mind switching to 6.3 xD
But on the other hand, 6.3 is a short-lived kernel too, and so is 6.4...
The next LTS kernel will probably be 6.6 or so.

This situation is very confusing; the only thing I don't understand is why Debian 12 was released with 6.2 and not 6.1 LTS.

That would at least make the decision much easier for the Proxmox team :)

However, I don't think it will change anything, since you're both right.
6.2 is EOL upstream, while still getting support/backports from Debian.

And I think for now it doesn't matter either; 6.2 could be our stable kernel for Proxmox 8.0, while at some point we get 6.3 or 6.4/6.5 as an opt-in.

I strongly suspect that with Proxmox 8.1 we'll get OpenZFS 2.2 with some newer kernel as the default xD
 
Debian doesn't have any 6.2 kernel.
Bookworm comes with 6.1.x
Aah, you're right; I dunno why I thought that, maybe because of Ubuntu 23.04, which is not LTS anyway.

Then yeah, dunno what the Proxmox team is going to do.

Your comment made half of my post above meaningless xD
 
I wonder why the decision was made to use kernel 6.2 when 6.1 is LTS and 6.2 is already EOL.
Which features does Proxmox require from 6.2?
I cannot find a good answer on the lifecycle of Ubuntu's 6.2 kernel, but it looks like it's 9 months; with the 23.04 release, it's possible they give it 7 more months.
I hope we get an answer, as this doesn't feel right.
 
Hi Ramalama,

Thanks very much for the pointer to the maint_mode script - very useful! I often want to be able to do this.

As you have an x570 motherboard does /dev/hwrng work for you? It would be useful to know what board you have and what AGESA version?

Kind regards,

Jonathan
 

Hey, np!

cat /dev/hwrng
works, but it takes about 5 seconds before it outputs anything...

cat /dev/random, on the other hand, instantly outputs so much that Ctrl+C can barely stop it xD
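A quick way to sanity-check the device without an open-ended cat (a small sketch; the /dev/urandom fallback is purely for illustration on machines where no hardware RNG driver is bound):

```shell
# Read 16 bytes from the hardware RNG with a timeout and print them as hex,
# so a slow or stuck /dev/hwrng can't hang the shell indefinitely.
DEV=/dev/hwrng
[ -r "$DEV" ] || DEV=/dev/urandom   # fallback purely for demonstration
BYTES=$(timeout 5 dd if="$DEV" bs=16 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n')
echo "$DEV: $BYTES"
```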

I have an X570D4I-2T from ASRock Rack with AGESA 1.2.0.7 (latest BIOS).
AGESA 1.2.0.a, which fixes every BIOS bug AMD had, already exists, but it hasn't arrived here yet, since ASRock Rack is extremely slow at providing BIOS updates.

We can't really compare, because you are using fTPM, which is emulated by the CPU, and I have a real LPC TPM chip on my board.
(Though I think I can switch to fTPM in the BIOS for testing, but shutting everything down/rebooting etc. is a pain.)

But if you really want, I could check with fTPM, provided that hopefully doesn't mess something up in Proxmox,
since the TPM module has its own EEPROM storage for storing keys.
Dunno if there are any keys, and whether switching between fTPM and LPC TPM messes something up on Linux/Proxmox.

Hopefully someone else has a clue whether I can switch between them without issues; then I can test it for sure!

Cheers
 
But why? Is a stable Debian Kernel not the better choice?
LTS kernel trees aren't really more stable these days; all of them get backports from mainline where applicable, applying those to other kernel trees isn't a problem, and thus 6.2 isn't EOL from either our side or Ubuntu's.

FWIW, we often backport security fixes much more quickly than Debian, which normally only releases new kernel versions with their point releases every few months; that alone would be way too slow for us.
Besides that, we integrate ZFS directly and carry Proxmox VE specific patches, e.g. for PCI passthrough and for making Linux bridge MAC assignment actually stable.
The choice of the Ubuntu kernel as a base mostly stems from 1) it being basically the upstream for the AppArmor subsystem, which our containers rely on, and 2) joined forces: more eyes on one kernel cover more ground; we can take in backports from them and in turn send the occasional patch series back, e.g., if investigation of some enterprise support case found a bug or the like.

https://www.bleepingcomputer.com/ne...inux-kernel-flaw-allows-privilege-escalation/

These patches were subsequently backported to stable kernels (6.1.37, 6.3.11, and 6.4.1)

No mention of 6.2 as it is EOL.
So that's why I ask about Proxmox and 6.2
We saw this and looked into it. As no public exploit is available, and as the author explicitly mentions that "exploiting this vulnerability is considered challenging", we saw no need for an immediate rush and rather waited until the dust settled, to avoid a security fix for a theoretical issue causing more harm than good. We're already testing a newer 6.2 kernel with the respective patches backported, which will hit the repos later today if nothing turns up in QA.

Edit: Available on pvetest repo.
 
OK, so we tried to upgrade an up-to-date PVE 7 to 8, i.e. Bullseye to Bookworm.

The box is an HPE ProLiant, Xeon, 16 GB RAM, 4x 4 TB HDD with at least 50% free; it was set up a couple of months ago, worked fine, and had all updates installed.

Duly followed https://pve.proxmox.com/wiki/Upgrade_from_7_to_8

pve7to8 --full reported OK.
Updated all Debian and PVE repositories to Bookworm, then ran apt update && apt dist-upgrade and rebooted.

after the 1st reboot:
cannot connect to the VEs (they obviously didn't start)
501 on GUI port 8006
ssh to the host works
pct start gave:

:~# pct start 100
run_buffer: 322 Script exited with status 1
lxc_setup: 4437 Failed to run mount hooks
do_start: 1272 Failed to setup container "100"
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2107 Failed to spawn container "100"
startup for container '100' failed

after the 2nd reboot:
box is totally inaccessible via http and ssh
on the local console we see a Debian login screen (not the usual "fancy" PVE one with the hint to port 8006, just plain basic Debian)
login as root works
top shows some PVE processes running (pvestatd, pve-firewall, pve-ha-crm, etc.)
pct start 100 gives "bridge vmbr0 does not exist"
networking is down and cannot be started
/etc/network/interfaces looks good (= like before, like on the other boxes on site)
ifup and ifdown give "permission denied"
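For comparison, a typical working vmbr0 stanza in /etc/network/interfaces looks roughly like this (the NIC name and addresses are placeholders, not taken from this box):

```
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
```

When a stanza like this is present but the bridge doesn't exist, networking simply never came up, which matches the symptoms above.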

That's what I call an upgrade perfectly gone wrong :)

Unless anybody has a proposal, we will set up a fresh PVE 7 and add our containers back from backup.

I wanted to post the full log here to help track down the issue and maybe prevent it from happening to others, but the log is too long; find it at https://pastebin.com/Ceabm8hV

Cheers,
~R.
 
on local console we see a Debian login screen (not the usual "fancy" one from PVE with hint to port 8006, just plain basic Debian)
this is probably due to you selecting the maintainer's version when the upgrade asked what to do with the changes to /etc/issue (although it should get regenerated on boot on a correctly set-up PVE node...)

you should also check for errors after running commands from the upgrade guide:
Code:
root@spitfire:~# sed -i -e 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-install-repo.list
sed: can't read /etc/apt/sources.list.d/pve-install-repo.list: No such file or directory
(from the pastebin)

I guess this resulted in you not having any bookworm apt sources for Proxmox VE...
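To catch that kind of failure early, the rewrite step from the guide can be wrapped so a missing file produces a loud warning instead of a half-configured system (a small sketch; the helper function name is made up):

```shell
# Rewrite bullseye -> bookworm in a repo list, but only if the file exists;
# otherwise print a warning and fail, instead of silently moving on.
switch_repo_to_bookworm() {
    f="$1"
    if [ -f "$f" ]; then
        sed -i -e 's/bullseye/bookworm/g' "$f"
    else
        echo "warning: $f not found, check your PVE repo configuration" >&2
        return 1
    fi
}
```

Usage: `switch_repo_to_bookworm /etc/apt/sources.list.d/pve-install-repo.list`, then check the exit status before continuing with apt update.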


Wanted to post the full log here to help tracking the issue and maybe prevent it from happening to others, but the log is too long; find it at https://pastebin.com/Ceabm8hV
sadly this is cut off at some point (all of the log looks right at a quick glance, so I expect that whatever caused the issues happened after your SSH client disconnected from the system). I hope you reconnected somehow and continued with the upgrade, since the log ends somewhere mid-unpacking?

in any case, check the journal on the node for errors that might explain what went wrong,
and try configuring the appropriate Bookworm apt sources for PVE (pve-enterprise or pve-no-subscription); see:

https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_package_repositories

then run: `apt update; apt dist-upgrade`

If this does not fix your issue - please open a new thread in the forum with the errors you ran into.
 
Thank you Stoiko!

yes, the update got interrupted at some point (broken pipe, dunno why or how; I guessed it was due to the network going down during, and because of, the update?)

The apt sources are/were fully Bookworm (edited manually).

Thanks for the hint to do an "apt update" again though!

It gave an error and said to run "dpkg --configure -a".
I did that, and the upgrade continued and finished.
The network was still unreachable.
After a reboot the network is up
"fancy" PVE login on the local screen
log in via ssh works
apt update && apt dist-upgrade go through without any error
GUI is up
all VEs up, running and accessible
rebooted again, just to be sure
all good :)

So the problem obviously was that the update was interrupted and consequently could not finish properly.
Everything works now and we are proudly running PVE 8.0.3 :)

Thank you guys and keep up your GREAT work!

Cheers,
~R.
 
