Proxmox VE 8.4 released!

t.lamprecht

Proxmox Staff Member
We are excited to announce that our latest software version 8.4 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.10 "Bookworm", but uses the Linux kernel 6.8.12 by default, with kernel 6.14 available as an opt-in, and ships QEMU 9.2.0, LXC 6.0.0, ZFS 2.2.7 with compatibility patches for kernel 6.14, and Ceph Squid 19.2.1 as a stable option.

Proxmox VE 8.4 includes the following highlights:
  • Live migration with mediated devices
  • API for third party backup solutions
  • Virtiofs directory passthrough
  • and much more

As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap

Press release
https://www.proxmox.com/en/news/press-releases

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

There has been a lot of feedback from our community members and customers, and many of you reported bugs, submitted patches and were involved in testing - THANK YOU for your support!

FAQ
Q: Can I upgrade the latest Proxmox VE 7 to 8 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_7_to_8

Q: Can I upgrade an 8.0 installation to the stable 8.4 via apt?
A: Yes, upgrading is possible via apt and via the GUI.
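
A minimal sketch of the command-line path, assuming the Proxmox VE 8 package repositories are configured correctly:

    apt update
    apt full-upgrade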

Q: Can I install Proxmox VE 8.4 on top of Debian 12 "Bookworm"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

Q: Can I upgrade from Ceph Reef to Ceph Squid?
A: Yes, see https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid

Q: Can I upgrade my Proxmox VE 7.4 cluster with Ceph Pacific to Proxmox VE 8.4 and to Ceph Reef?
A: This is a three-step process. First, you have to upgrade Ceph from Pacific to Quincy; afterwards you can upgrade Proxmox VE from 7.4 to 8.4. As soon as you are running Proxmox VE 8.4, you can upgrade Ceph to Reef. There are a lot of improvements and changes, so please follow the upgrade documentation exactly (a quick pre-flight check for the Proxmox VE step is sketched after the links):
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef
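
As a quick pre-flight check for the middle step, the pve7to8 checklist script that ships with up-to-date Proxmox VE 7.4 packages can be run before and after the upgrade:

    pve7to8 --full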

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.
 
Would have loved to see ZFS 2.3.x, as I have a few cases where raid-z expansion would be useful.
As that ZFS release has some other relatively big changes, e.g. in how the ARC is managed, especially w.r.t. its interaction with the kernel and marking its memory as reclaimable, we did not feel comfortable including ZFS 2.3 just yet; we would rather ensure those changes can mature and are well understood.
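
For the curious: raid-z expansion in ZFS 2.3 works by attaching an additional disk to an existing raidz vdev; a hypothetical invocation, with pool, vdev, and device names as placeholders:

    zpool attach tank raidz1-0 /dev/sdx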
 
Great release!

Is there any documentation about the API that Proxmox VE now provides for developers to write backup provider plugins?

We have a few example plugins that we are currently polishing to make them a bit more presentable and available through a single git repo.
You can see the current version of two Perl plugins over here:
https://lore.proxmox.com/all/20250407120439.60725-1-f.ebner@proxmox.com/
And there is also a Rust-based plugin to showcase that this is not locked to a single programming language; we'll try to get all of that published by the end of the week.

And there's naturally basic documentation in the base module for the backup provider plugin API:
https://git.proxmox.com/?p=pve-stor...7;hb=0c6234f0bc9772320786ca7ce5f45018630557de
 
Hi @xenon96,
Just to add: the documentation in the base module mentioned above is in the Perl POD format. You can view it more easily using e.g. perldoc src/PVE/BackupProvider/Plugin/Base.pm, and there are also tools for converting it to other formats, like pod2html.
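
For example, from a checkout of the repository linked above (the output file name is arbitrary):

    perldoc src/PVE/BackupProvider/Plugin/Base.pm
    pod2html --infile=src/PVE/BackupProvider/Plugin/Base.pm --outfile=base-plugin-api.html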

A backup solution needs to implement the backup provider plugin (see above) as well as a regular storage plugin with quite basic functionality for listing backups and the like. The storage plugin documentation is still being worked on, but you'll only need a few of the methods to make it work; see the examples @t.lamprecht mentioned.
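
To give a rough idea of the shape such a plugin takes, here is a minimal sketch that subclasses the base module; the method names are illustrative placeholders only, the authoritative list of methods, parameters, and return values is the POD in Base.pm linked above:

    package PVE::BackupProvider::Plugin::Example; # hypothetical name

    use strict;
    use warnings;

    use base qw(PVE::BackupProvider::Plugin::Base);

    # Placeholder methods for illustration only; consult the POD
    # documentation in Base.pm for the real API.
    sub provider_name {
        my ($self) = @_;
        return "example-provider";
    }

    sub backup_vm {
        my ($self, @args) = @_;
        # hand the guest data over to the external backup target here
        die "not implemented yet\n";
    }

    1;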

EDIT: Also, feel free to ask on the Proxmox VE development mailing list if you have more specific questions.
 
Great! Will it still be possible to run CentOS 7 containers?
CentOS Linux 7 reached end of life (EOL) on June 30, 2024, so you really should not run that release anymore.

That said, if they run for you now, they should keep running fine with the new release.
But if you already run CentOS 7, it quite probably means that you changed the kernel command line to use cgroup v1, and support for doing that will go away with the next major PVE version (i.e. PVE 9, due later this year).
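
For reference, "changed the kernel command line" here typically means the systemd.unified_cgroup_hierarchy=0 switch, e.g. on GRUB-based systems in /etc/default/grub (a sketch; your bootloader and existing options may differ):

    GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

If you find that switch on your nodes, plan the move off CentOS 7 before upgrading to PVE 9.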
 
Ok thanks for the info.

Is it possible to upgrade Ceph Quincy directly to Squid? Or is it better to upgrade to Reef first?

We strongly recommend doing upgrades without skipping versions, as noted in the respective Ceph Upgrade guides [1]:
Note: while it is possible to upgrade directly from the older Ceph Quincy (17.2+) release to Squid (19.2+), we primarily test and recommend upgrading to Ceph Reef first before upgrading to Ceph Squid. If you want to skip one upgrade, we recommend testing this first on a non-production setup. The upgrade steps are the same as for upgrading from Reef to Squid, but you must ensure that you have no FileStore-based OSDs left, as FileStore support was removed with Ceph 18.2 Reef.


[1] https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid
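
To check for leftover FileStore OSDs, the object store backend of every OSD can be listed with the standard ceph osd metadata command, for example:

    ceph osd metadata | grep '"osd_objectstore"' | sort | uniq -c

Every OSD should report "bluestore" before you attempt the upgrade.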
 
Awesome news. One feature I'd like to see is an easier way to share a GPU (or parts of one) with LXC containers for things like LLMs, so I don't have to dedicate a whole card to an LXC that may not be running all the time. Maybe this is already possible, but it seems to be a cumbersome process with mapping users/groups and all this rigamarole that I could never quite get working... If there's a more recent guide for setting this up, please point me at it.
 
Bummer about having to delete and recreate every Ceph OSD on our new cluster, but at least it's an identified issue...

Edit: release note typo: "only OSDs that [where -> were] newly created using Ceph 19.2 Squid"
 
Any chance of virtiofs being available for containers? Would love an easier alternative to bind mounts for unprivileged containers. Getting the uid/gid mappings correct is a nightmare.
 
virtiofs is specific to QEMU and thus to virtual machines, so you can't use it in containers. It's quite unlikely that this will ever change, since bind mounts are essentially virtiofs for LXC containers.

I understand your frustration with how cumbersome dealing with mounts inside containers is, but why don't you use a VM instead? Most typical applications can be run from a Docker container, and if you put everything in one VM, it doesn't need to use more resources than LXC containers would. Another benefit: you have a single VM to maintain (system updates etc.) and won't run into issues with nested containers (like using Docker containers inside an LXC, which the Proxmox developers don't recommend). I personally use LXC containers only for stuff that doesn't need any mounts (e.g. Pi-hole) or that needs hardware passthrough (like using the iGPU for transcoding in Jellyfin or Plex).
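
For the bind mount part specifically, adding one to a container is a single command; the container ID and paths below are placeholders:

    pct set 101 -mp0 /tank/media,mp=/mnt/media

The uid/gid mapping for unprivileged containers is a separate chore, though.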
 