Proxmox VE 7.3 released!

martin

Proxmox Staff Member
We're very excited to announce the release of Proxmox Virtual Environment 7.3. It's based on Debian 11.5 "Bullseye", but uses a newer Linux kernel 5.15 or 5.19, QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6.

Proxmox Virtual Environment 7.3 comes with initial support for Cluster Resource Scheduling, enables updates for air-gapped systems with the new Proxmox Offline Mirror tool, and improves the UX of various management tasks. It also brings interesting storage technologies like ZFS dRAID, along with countless enhancements and bugfixes.

Here is a selection of the highlights:
  • Debian 11.5 "Bullseye", but using a newer Linux kernel 5.15 or 5.19
  • QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6
  • Ceph Quincy 17.2.5 and Ceph Pacific 16.2.10; heuristic checks to see if it is safe to stop or remove a service instance (MON, MDS, OSD)
  • Initial support for a Cluster Resource Scheduler (CRS)
  • Proxmox Offline Mirror - https://pom.proxmox.com/
  • Tagging virtual guests in the web interface
  • CPU pinning: Easier affinity control using taskset core lists
  • New container templates: Fedora, Ubuntu, Alma Linux, Rocky Linux
  • Reworked USB devices: can now be hot-plugged
  • ZFS dRAID pools
  • Proxmox Mobile: based on Flutter 3.0
  • And many more enhancements.
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.3

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-3

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-3

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 7.0 or 7.1 or 7.2 to 7.3 via GUI?
A: Yes.

Q: Can I upgrade Proxmox VE 6.4 to 7.3 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I install Proxmox VE 7.3 on top of Debian 11.x "Bullseye"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.3 with Ceph Octopus/Pacific/Quincy?
A: This is a three-step process. First upgrade Proxmox VE from 6.4 to 7.3, then upgrade Ceph from Octopus to Pacific, and finally from Pacific to Quincy. There are a lot of improvements and changes, so please follow the upgrade documentation exactly (a condensed sketch of the first step follows after the links):
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
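In condensed form, the first of those steps (PVE 6.4 to 7.x) looks roughly like the following; the wiki page is authoritative, and the exact repository file names depend on your setup:

  # run the built-in checker script first and resolve any warnings
  pve6to7 --full
  # switch the Debian repositories from Buster to Bullseye
  sed -i 's/buster/bullseye/g' /etc/apt/sources.list
  # adjust the Proxmox repository files under /etc/apt/sources.list.d/
  # the same way, then perform the actual upgrade
  apt update && apt dist-upgrade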

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Initial support for a Cluster Resource Scheduler (CRS)

Is there any more information about it?

Is it an automatic migration of VMs based on used resources (RAM/CPU)?
 
See https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#ha_manager_crs

Basically it's the foundation for that. For now, it's limited to the actions where the HA stack already had to find a new node (recovering fenced services, the migrate shutdown-policy, and HA group changes), and it uses the static load (configured CPU cores and memory, with memory having much more weight). We're actively working on extending that, but we found the current version already a big improvement for HA, and releasing in smaller steps always makes sense.
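For those who want to try it right away: the CRS mode is a cluster-wide option in datacenter.cfg, also exposed under Datacenter -> Options in the web UI. A minimal sketch:

  # /etc/pve/datacenter.cfg
  # select the new static-load scheduler for the HA manager
  # (the default 'basic' mode picks the node with the fewest active HA services)
  crs: ha=static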
 
Oh great! Looking forward to the development of it. I think this is a much-requested feature and I'd like to see it working!

Thanks for your good work!
 
Thanks for another great release!
From the release notes:
In the web interface, new VMs default to iothread enabled and VirtIO SCSI-Single selected as SCSI controller (if supported by the guest OS)
The admin docs don't seem to reflect this change yet. As the default is now virtio-scsi-single with iothread enabled, would your advice be to also change this in existing VMs? Most importantly, would there be any pitfalls or downsides?
 
Great :)
Finally draid and tagging :)

Why is dRAID only recommended for 50+ disks? From what I read it sounded like a good alternative to RAIDZ because of its faster resilvering times, as something like an 8x 20 TB RAIDZ2 could take an eternity to resilver.

Edit:
And don't forget to do a CTRL+F5 when first visiting the web UI after the upgrade. I've encountered problems in the past when using an outdated browser-cached web UI with a newer major/minor PVE version.
 
As the default is now virtio-scsi-single with iothread enabled, would your advice be to also change this in existing VMs? Most importantly, would there be any pitfalls or downsides?
It depends a bit on your workload and underlying storage, but in general it can help to reduce some IO pressure, or even hangs, in the guest, especially in backup or snapshot situations, which can both put an additional amount of IO load on the storage and guest. We know of setups that saw a big improvement with IO threads, and some where enabling them had basically no significant (visible) impact. There are not many pitfalls, but IO threads are still an area with quite a lot of ongoing change, albeit they became pretty stable in recent PVE releases. Also, if an issue really pops up, they can always be disabled until it's fixed.
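If you want to switch an existing VM over, that can be done from the Hardware panel or on the CLI; a rough sketch, assuming a hypothetical VM 100 with a single SCSI disk on local-lvm:

  # switch the SCSI controller type to VirtIO SCSI single
  qm set 100 --scsihw virtio-scsi-single
  # re-set the disk with the iothread flag enabled
  qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
  # a full stop/start of the VM is required for the change to take effect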
 
  • Like
Reactions: janssensm
Why is dRAID only recommended for 50+ disks? From what I read it sounded like a good alternative to RAIDZ because of its faster resilvering times, as something like an 8x 20 TB RAIDZ2 could take an eternity to resilver.
That was worded a bit strangely, and the limit was really set too high; it slipped my proofreading. I adapted it to better match what we also write in the reference docs: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_zfs_draid
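For reference, a dRAID vdev is specified as draid<parity>[:<data>d][:<children>c][:<spares>s]; a minimal sketch with 14 hypothetical disks:

  # dRAID2: double parity, 4 data disks per redundancy group,
  # 14 children in total, 2 distributed hot spares
  zpool create tank draid2:4d:14c:2s /dev/sd[a-n]

The distributed spares are what enables the fast rebuilds, as every disk in the vdev takes part in resilvering.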
 
Can you please give more detail about this item from the roadmap:
Cross-cluster migration mechanism - foundation and CLI integration released with Proxmox VE 7.3
Is this possible?

Best regards,
Rares
 
Check the qm manpage for the remote-migrate command and/or the commit that introduced it. As the roadmap hints, it's meant as the foundation for a bigger integration we're developing, but an admin who doesn't fear the CLI can already try it now (still experimental!).
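To give an idea of the call shape (see man qm for the authoritative synopsis; the host, token, and IDs below are made up):

  # experimental: migrate VM 100 to another cluster, keeping ID 100,
  # authenticated with an API token of the target cluster
  qm remote-migrate 100 100 \
    'host=target.example.com,apitoken=PVEAPIToken=root@pam!mytoken=<secret>,fingerprint=<target-cert-fingerprint>' \
    --target-bridge vmbr0 --target-storage local-lvm --online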
 
Thanks for the new release!

I see that Ceph Octopus is no longer officially supported in this version. Since the upgrade guide for 6.x to 7 mentions that you first have to update to Proxmox 7 and then to Ceph Pacific, I assume that Octopus still works, right? Is it necessary to update to Pacific immediately after updating to 7.3, or is it possible to wait a few weeks?

Best regards, Roel
 
I see that Ceph Octopus is no longer officially supported in this version.
Yes, Ceph Octopus is end of life now (upstream and at Proxmox); we recommend everyone upgrade to Pacific and then on to Quincy.
Since the upgrade guide for 6.x to 7 mentions that you first have to update to Proxmox 7 and then to Ceph Pacific, I assume that Octopus still works, right?
Yes, it still works; but please note that Proxmox VE 6.4 has also been EOL for a few months now, so I'd really recommend upgrading sooner rather than later, as it won't get easier, and help may be harder to get once the specifics of the older versions are slowly forgotten.
Is it necessary to update to Pacific immediately after updating to 7.3, or is it possible to wait a few weeks?
From a purely functional POV it's not necessary to upgrade at all; nothing is designed to stop working on its own after EOL. It's just that bug fixes, and thus security updates, no longer arrive for EOL versions. It definitely still makes sense not to rush an upgrade and to ensure everything works out before continuing in the upgrade process, even if you still run setups you couldn't update to the current PVE/Ceph software stack, for whatever reason that may be.
 
Great, thanks for clarifying! Upgrades to 7.x are already planned, but glad to know that we don't have to go to Pacific immediately after!

Best regards, Roel
 
So we can finally hot-plug USB ports/devices from the web interface?
Yes, that should work for new guests (machine version >= 7.1 and ostype Windows >= 8 or Linux >= 2.6).
 
So we can finally hot-plug USB ports/devices from the web interface?
Yes, after upgrading and ensuring that the VM actually started up with the new QEMU.

Note that Windows VMs are machine-version pinned, due to Windows being quite inflexible in general, so there you'd need to update the pinned machine version too (UI -> VM -> Hardware -> Edit Machine).
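On the CLI the pinned version corresponds to the machine option; a sketch for a hypothetical Windows VM 101:

  # show the currently pinned machine version
  qm config 101 | grep ^machine
  # pin a 7.1+ machine type so the new USB hot-plug support becomes available
  qm set 101 --machine pc-i440fx-7.1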
 
Apologies for my newbie question.
What's the best way to upgrade from 7.2-3? Do I need to update my sources file and then do a dist-upgrade?
 
No, just do a normal upgrade using the webUI. Don't forget to click the update button first, so PVE sees that there are new updates available.
 
What's the best way to upgrade from 7.2-3? Do I need to update my sources file and then do a dist-upgrade?
If you are running PVE 7.2 you do not need to update the sources; just run apt update && apt dist-upgrade, or upgrade from the PVE web UI.
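For reference, updates only show up if a Proxmox package repository is configured at all; without a subscription that's a line like the following (the exact file location can vary per setup):

  # e.g. in /etc/apt/sources.list or a file under /etc/apt/sources.list.d/
  deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
  # or, with a valid subscription:
  # deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise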
 
If you are running PVE 7.2 you do not need to update the sources; just run apt update && apt dist-upgrade, or upgrade from the PVE web UI.
I did refresh and upgrade on both of my nodes, but they are both still on 7.2-3. I also tried apt update && apt dist-upgrade from the shell; no upgrade.
 
