Proxmox VE 8.1 released!

martin
Proxmox Staff Member
We're very excited to announce release 8.1 of Proxmox Virtual Environment! It's based on Debian 12.2 "Bookworm" but uses a newer Linux kernel 6.5, QEMU 8.1.2, and OpenZFS 2.2.0 (with stable fixes backported).

Here is a selection of the highlights of Proxmox VE 8.1:
  • Debian 12.2 (“Bookworm”), but using a newer Linux kernel 6.5 as the stable default
  • Latest versions of QEMU 8.1.2 and ZFS 2.2.0, already including the most important bugfixes from 2.2.1
  • Software-defined Networking (SDN)
  • Secure Boot
  • New flexible notification system with matcher-based approach
  • Ceph Server: Ceph Reef 18.2.0 is default, and Ceph Quincy 17.2.7 comes with continued support.
  • Countless GUI and API improvements.
Edit: updated to ISO release 2 with the current package set, including an updated kernel and ZFS 2.2.2, on February 7, 2024.

As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap

Press release
https://www.proxmox.com/en/news/press-releases/

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-8-1

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

There has been a lot of feedback from our community members and customers, and many of you reported bugs, submitted patches and were involved in testing - THANK YOU for your support!

FAQ

Q: Can I upgrade latest Proxmox VE 7 to 8 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
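In short, the documented flow looks roughly like this (a sketch only; the wiki page above is authoritative and includes the `pve7to8` checklist tool and repository details):

```shell
# Run the built-in checklist script first and resolve any warnings it reports
pve7to8 --full

# Switch the Debian repository configuration from bullseye to bookworm
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# Refresh the package indexes and perform the major upgrade
apt update
apt dist-upgrade

# Reboot into the new 6.5 kernel afterwards
```

Running `pve7to8` again after the upgrade is a quick way to confirm nothing was left unresolved.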

Q: Can I upgrade an 8.0 installation to the stable 8.1 via apt?
A: Yes, upgrading from 8.0 to 8.1 is possible via apt or through the GUI.
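For a minor point release like 8.0 -> 8.1, a regular package upgrade is enough (a sketch, assuming the standard Proxmox package repositories are already configured):

```shell
# Refresh package indexes and pull in the 8.1 packages
apt update
apt dist-upgrade   # alternatively: Node -> Updates -> Upgrade in the web GUI
```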

Q: Can I install Proxmox VE 8.1 on top of Debian 12 "Bookworm"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

Q: How can I upgrade my Proxmox VE 7.4 cluster with Ceph Pacific to Proxmox VE 8.1 and to Ceph Reef?
A: Upgrading involves a three-step process. First, upgrade Ceph from Pacific to Quincy. Next, upgrade Proxmox VE from version 7.4 to 8.1. Finally, once running Proxmox VE 8.1, upgrade Ceph to Reef. This process involves significant improvements and changes, so it is crucial to follow the upgrade documentation exactly. For detailed instructions, please visit:
  1. Ceph Pacific to Quincy Upgrade Guide
  2. Upgrading from Proxmox VE 7 to 8
  3. Ceph Quincy to Reef Upgrade Guide
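As an illustration of the Ceph steps, each release switch typically comes down to pointing the Ceph apt repository at the next release before upgrading the packages (a hedged sketch only; the linked guides are authoritative and cover the required daemon restart order: monitors first, then managers, OSDs, and finally MDS):

```shell
# Point the Ceph repository at the next release (here: pacific -> quincy)
sed -i 's/pacific/quincy/' /etc/apt/sources.list.d/ceph.list

apt update
apt full-upgrade

# Then restart the Ceph daemons in the documented order, and verify:
ceph versions
```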

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Thanks! The upgrade is running, can't wait to try it out! Love your work!
 
ZFS 2.2.1 already released.
Yes, and as stated in the release overview above, we already included all important fixes a week ago.
We explicitly decided against a last-minute update to the full 2.2.1 release, as it also includes some new features and changes that we were simply not comfortable rushing out; and, as mentioned, all important fixes are already in our ZFS build anyway.
 
Hello, I tried to install the 8.1-1 version now, but the problem with freezing during installation on Nvidia video cards remains the same!
Any comments and suggestions about this will be appreciated.
 
Proxmox support, developers, Martin, et alia:

THANK YOU for another great upgrade.
(Yes, I am yelling it, because everyone at Proxmox deserves to be yelled at in this way, with vigor and appreciation.)

I just completed an upgrade of my cluster and all is looking good. I also want to say, in no uncertain terms, that I appreciate all the hard, quality work that everyone has done (including the stellar support for non-commercial users in the forum).

I wish everyone a happy, safe, healthy, and relaxing holiday season.

Stuart
 
Nice job, folks.
However, I would like to point out some features that, IMHO, should be in the web GUI:
1 - ZFS ARC max size: I wonder why it appears only in the installer.
2 - That little green dot beside the NIC, which shows us the link state of the NIC, should appear in the web GUI as well, in the network section! It would be nice.

Oh! I still expect the option to import disks from the web GUI! It's pretty annoying using the CLI to do that (IMHO at least!).

That's it for now.
I will be back later for more insights.
Thanks a lot for this wonderful release!
 
1 - ZFS ARC max size: I wonder why it appears only in the installer.
As answered on your post to the mailing list a few weeks ago:
We would need to either have a separate config entry, which can easily get out of sync with the actually configured values, or parse all possible ways to set this, e.g., modprobe configs, which isn't too nice.

With a few trade-offs/limitations one could probably work around that, but IMO it is a bit too much hassle for something that one normally changes only once or twice (or, once this change is in, probably never for new setups), so for now I'd keep this manual.
So not planned for the foreseeable future.
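For reference, changing the ARC limit manually after installation is a short procedure (a sketch; the 8 GiB value below is just an example, pick one suited to your RAM and workload):

```shell
# Limit the ZFS ARC to 8 GiB (value is in bytes)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Rebuild the initramfs so the setting is applied at boot
update-initramfs -u -k all

# The setting takes effect after a reboot; to also apply it immediately:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```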
That little green dot beside the NIC, which shows us the link state of the NIC, should appear in the web GUI as well, in the network section! It would be nice.
Yes, that could be nice, albeit we'd need to juggle a bit with the API (mixing config with state). Feel free to open an enhancement request over at our Bugzilla: https://bugzilla.proxmox.com/
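Until something like that lands in the GUI, the link state is easy to check from a node's shell:

```shell
# Brief per-interface state (UP/DOWN) for all NICs
ip -br link

# Carrier state of a single NIC (1 = link up); the interface name is an example
cat /sys/class/net/eno1/carrier
```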
 
The changelog mentions the following:
  • Ceph Reef is now supported and the default for new installations.
    • Reworked defaults bring improved performance and increased read speed out of the box, with less tuning required.
Do I automatically get to enjoy the "Reworked defaults" after upgrading to Reef or how would I apply these on existing Ceph Clusters (as the point above is about new installations)?
 
The changelog mentions the following:

Do I automatically get to enjoy the "Reworked defaults" after upgrading to Reef or how would I apply these on existing Ceph Clusters (as the point above is about new installations)?
We did a very quick internal benchmark comparison a few weeks ago and saw that writes got a bit faster in Reef compared to Quincy, and reads got a lot faster. Those are changed defaults within Ceph and will be used in Reef -> an update should suffice.
(as the point above is about new installations)?
That means that a new Ceph installation in a cluster that has not installed Ceph yet will preselect Reef in the GUI as the version to install. Though you can, of course, still switch to Quincy.
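After the upgrade you can confirm that all daemons are actually running Reef (a sketch):

```shell
# Shows which Ceph release each daemon type (mon, mgr, osd, ...) is running
ceph versions

# Optionally, inspect which non-default options are still pinned in the config database
ceph config dump
```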
 
Problems during install

Code:
Building module:
Cleaning build area...
make -j8 KERNELRELEASE=6.5.11-4-pve -C /lib/modules/6.5.11-4-pve/build M=/var/lib/dkms/r8168/8.051.02/build........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.5.11-4-pve (x86_64)
Consult /var/lib/dkms/r8168/8.051.02/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.5.11-4-pve failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/proxmox-kernel-6.5.11-4-pve-signed.postinst line 20.
dpkg: error processing package proxmox-kernel-6.5.11-4-pve-signed (--configure):
 installed proxmox-kernel-6.5.11-4-pve-signed package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of proxmox-kernel-6.5:
 proxmox-kernel-6.5 depends on proxmox-kernel-6.5.11-4-pve-signed | proxmox-kernel-6.5.11-4-pve; however:
  Package proxmox-kernel-6.5.11-4-pve-signed is not configured yet.
  Package proxmox-kernel-6.5.11-4-pve is not installed.

Fixed!!
I have gone back to the previous r8169 module, which used to generate errors but apparently that is now solved.
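If anyone hits the same DKMS build failure, the workaround boils down to dropping the out-of-tree module so the in-kernel r8169 driver is used again (a sketch; the blacklist file name below is hypothetical, use whatever you created when installing r8168):

```shell
# Remove the out-of-tree Realtek driver that fails to build against kernel 6.5
apt remove r8168-dkms

# Drop any blacklist of the in-kernel driver, then rebuild the initramfs
rm -f /etc/modprobe.d/r8169-blacklist.conf
update-initramfs -u -k all

# Finish configuring the kernel package whose postinst failed earlier
apt -f install
```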
 
Congratulations on SDN moving to supported!
After upgrading from 8.0.4 (a fresh install, no custom setup) to 8.1.3 and configuring SDN, I get the error:

WARN: missing 'source /etc/network/interfaces.d/sdn' directive for SDN support!

With SDN moving to supported, shouldn't this be done automatically?
 
WARN: missing 'source /etc/network/interfaces.d/sdn' directive for SDN support!

With SDN moving to supported, shouldn't this be done automatically?
This snippet is present out-of-the-box for new installations, but those upgrading from older ones might still need to make some adaptations, as documented:
https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_installation

For older installations we only warn, as we did not yet make SDN support a hard dependency; we're trying to avoid interruptions to existing services. With further stabilization, e.g., of the new DHCP IPAM feature in a future point release, we might switch that and then also enforce that new snippet-inclusion line for systems upgrading from older versions.
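For upgraded systems, the manual step is a one-liner appended to /etc/network/interfaces (a sketch, per the linked documentation; check first that the line is not already present):

```shell
# Make the network stack pick up the SDN-generated configuration
grep -q 'interfaces.d/sdn' /etc/network/interfaces || \
    echo "source /etc/network/interfaces.d/sdn" >> /etc/network/interfaces
```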
 
