Proxmox VE 7.3 released!

There already is one for HA: the migrate shutdown-policy. And yes, once set, the new CRS will be used to select better target nodes for it.
For non-HA guests it's planned, but it needs much more extra work (especially if you consider more than one narrow use case where everything is shared already, which could just use HA anyway). Once that mechanism is there, the new CRS can be used for it too. But as said, CRS is "just" the target selection, not the underlying mechanism that handles such migrations - that's done by the HA stack, which already has all the required information.
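For reference, a minimal sketch of the datacenter.cfg line behind that policy (it can also be set in the web UI under Datacenter -> Options):

# /etc/pve/datacenter.cfg - migrate guests away when a node shuts down
ha: shutdown_policy=migrate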
I think what they were asking about, at least that's what I am asking for, is the possibility to essentially activate the shutdown migrate policy without shutting down.

I want to be able to set the node to maintenance, having the LRM mark itself as unavailable so the (HA) VMs get evacuated. Then I can work on the node while it is still running: upgrade, change networking, or do anything else that requires the node to be up.

When done, I can unmark the node as being in maintenance and (in the future, with issue 2115 perhaps), VMs can rebalance back to the node, and I can continue with the next node.
 
No, you just need to clear the browser cache and ensure that you aren't connected to a worker of the old (pre-upgrade) API daemon anymore.
OK, but is it not possible to enforce this? It is confusing not to see a feature (CPU affinity is the reason I am asking) even after a restart.
 
OK, but is it not possible to enforce this? It is confusing not to see a feature (CPU affinity is the reason I am asking) even after a restart.

Yes, if you tell the browser to turn off caching completely, which naturally makes the interface less efficient. Note that we already try a lot to push browsers in the right direction, like including the version as a request query parameter in the index so that it changes on updates and thus requests a different URL than before, but browsers are beasts and not always considerate ones. We periodically check whether there's something better that is still reasonable to do; if we find something, we'll improve it.
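If in doubt, one way to rule out the stale-worker side of this (a sketch, assuming shell access on the node) is to restart the API daemons after the upgrade and then hard-reload the tab:

# restart the UI/API daemons so no pre-upgrade workers keep serving requests
systemctl restart pvedaemon pveproxy
# then force a full reload in the browser (e.g. Ctrl+Shift+R)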
 
Yes, if you tell the browser to turn off caching completely, which naturally makes the interface less efficient. Note that we already try a lot to push browsers in the right direction, like including the version as a request query parameter in the index so that it changes on updates and thus requests a different URL than before, but browsers are beasts and not always considerate ones. We periodically check whether there's something better that is still reasonable to do; if we find something, we'll improve it.

OK I see, thanks for the info.
 
I think what they were asking about, at least that's what I am asking for, is the possibility to essentially activate the shutdown migrate policy without shutting down.

I want to be able to set the node to maintenance, having the LRM mark itself as unavailable so the (HA) VMs get evacuated. Then I can work on the node while it is still running: upgrade, change networking, or do anything else that requires the node to be up.

When done, I can unmark the node as being in maintenance and (in the future, with issue 2115 perhaps), VMs can rebalance back to the node, and I can continue with the next node.
That can indeed be implemented now without too much trouble (knocks on wood), but also independently of the new CRS, which will just provide better node targets for balancing.
 
That can indeed be implemented now without too much trouble (knocks on wood), but also independently of the new CRS, which will just provide better node targets for balancing.
That would be very useful! Is this already on the radar, or is there even a feature request in Bugzilla, or should someone (me) add it?
 
That would be very useful! Is this already on the radar, or is there even a feature request in Bugzilla, or should someone (me) add it?
There's one for the general non-HA thing, but I don't think there's one for enabling the already existing shutdown mechanism at runtime, so yes, please add an enhancement request and maybe mention this thread to avoid a colleague directly merging it as a duplicate of the non-HA one.
 
Quick newbie question: what's the best way to upgrade a live (single) system? Should I shut down all the VMs first, upgrade, then reboot, or would it be fine to upgrade with the VMs running and then reboot later (during the night, for example)?
 
Quick newbie question: what's the best way to upgrade a live (single) system? Should I shut down all the VMs first, upgrade, then reboot, or would it be fine to upgrade with the VMs running and then reboot later (during the night, for example)?
It normally is fine to upgrade while the VMs keep running.

Note though that you only need to reboot the node to boot into an updated kernel (e.g., check pveversion -v to see if there's a newer kernel in the list than the running one mentioned at the top); otherwise a reboot isn't necessary either.
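A quick sketch of that check on the node itself:

# kernel currently running
uname -r
# kernels installed as packages (compare against the running one above)
pveversion -v | grep -i kernel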
 
but how do I change the tag color?
Datacenter -> Options -> Edit: Tag Style -> Color Overrides
or list by tags
That's still to be done in the GUI; the global search at the top (CTRL + SHIFT + F) already allows searching for tags though, which should cover a lot of use cases in the meantime.
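In the meantime, tags can also be assigned from the CLI; a small sketch (VMID 100 and the tag names are made-up examples):

# assign tags to a VM; they then show up (and are searchable) in the web UI
qm set 100 --tags 'production;web'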
 
That can indeed be implemented now without too much trouble (knocks on wood), but also independently of the new CRS, which will just provide better node targets for balancing.
Ok cool.
That's really a missing feature, as sometimes you need to do tuning or a network change and want to set the node into maintenance without shutting it down.
 
As the questions are somewhat related, I'll try to clear that up with a short, more general explanation of how this is handled:

It actually depends on two things: first, the OS Type/Version has to match the quoted Windows >= 8 or Linux >= 2.6 (which should be a given, as older ones are EOL anyway), and second, the VM machine version.

That is the version we use to ensure backward compatibility for things like migration or live snapshots (those with RAM included), as there the virtual HW cannot change much, since that would otherwise break the running OS. As Windows is a much less elaborate and flexible OS than Linux, it often cannot cope well with even small changes, so we always pin Windows machines to the latest machine version available at VM creation. For Linux, OTOH, it's just a matter of doing a full restart so that the VM freshly starts up with the new QEMU.

You can manage and update any pinned machine version in the VM's Hardware tab on the web UI.

Hope that helps.
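For reference, a small CLI sketch of the same (VMID 100 and the version string are just examples; the Hardware tab does the same thing):

# show the pinned machine type/version, if any; no 'machine' line means 'latest'
qm config 100 | grep machine
# pin a specific machine version explicitly
qm set 100 --machine pc-i440fx-7.1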
It does. Thank you! :)

This is a Linux 2.6 OS, and my Machine type is set to Default/i440fx (Latest). I think that means this one is up to date and I don't have to recreate it to get the new USB hot-plug support?

What's the easiest way to test if I have the new USB functionality enabled?
 
We're very excited to announce the release of Proxmox Virtual Environment 7.3. It's based on Debian 11.5 "Bullseye" but using a newer Linux kernel 5.15 or 5.19, QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6.

Proxmox Virtual Environment 7.3 comes with initial support for Cluster Resource Scheduling, enables updates for air-gapped systems with the new Proxmox Offline Mirror tool, and has improved UX for various management tasks, as well as interesting storage technologies like ZFS dRAID and countless enhancements and bugfixes.

Here is a selection of the highlights:
  • Debian 11.5 "Bullseye", but using a newer Linux kernel 5.15 or 5.19
  • QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6
  • Ceph Quincy 17.2.5 and Ceph Pacific 16.2.10; heuristic checks to see if it is safe to stop or remove a service instance (MON, MDS, OSD)
  • Initial support for a Cluster Resource Scheduler (CRS)
  • Proxmox Offline Mirror - https://pom.proxmox.com/
  • Tagging virtual guests in the web interface
  • CPU pinning: Easier affinity control using taskset core lists (see the CLI sketch after this list)
  • New container templates: Fedora, Ubuntu, Alma Linux, Rocky Linux
  • Reworked USB devices: can now be hot-plugged
  • ZFS dRAID pools
  • Proxmox Mobile: based on Flutter 3.0
  • And many more enhancements.
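Regarding the CPU affinity highlight above, a minimal CLI sketch of the new affinity option (VMID 100 and the core list are made-up examples):

# pin the VM's vCPU threads to host cores 0-3 and 8-11 (taskset-style core list)
qm set 100 --affinity 0-3,8-11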
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.3

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-3

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-3

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 7.0 or 7.1 or 7.2 to 7.3 via GUI?
A: Yes.

Q: Can I upgrade Proxmox VE 6.4 to 7.3 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I install Proxmox VE 7.3 on top of Debian 11.x "Bullseye"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.3 with Ceph Octopus/Pacific/Quincy?
A: This is a three step process. First, you have to upgrade Proxmox VE from 6.4 to 7.3, and afterwards upgrade Ceph from Octopus to Pacific and then to Quincy. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
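A few read-only commands that help verify where you are after each step (a sketch; none of these change any state):

# PVE version after the 6.4 -> 7.x upgrade
pveversion
# Ceph versions the daemons are actually running after each Ceph step
ceph versions
# overall cluster health
ceph -s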

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
Does anyone else experience being logged out of the web interface of this new version after a few minutes with the message "permission denied - invalid PVE ticket (401)"? Because I do, and I can't seem to find a setting for a timeout like this. Is this a bug?
 
What's the easiest way to test if I have the new USB functionality enabled?
Add a SPICE USB device to that VM via its Hardware tab while it's running. That is just a virtual port for use with the SPICE virt-viewer, so it avoids passing through an actual (possibly in-use) device or port. If it doesn't stay pending (orange font), it got hot-plugged.
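For completeness, the same can be sketched on the CLI (VMID 100 is a made-up example; this assumes the VM is running so the change gets hot-plugged):

# add a virtual SPICE USB port to a running VM; if it doesn't stay pending, hot-plug worked
qm set 100 --usb0 spice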
 
Does anyone else experience being logged out of the web interface of this new version after a few minutes with the message "permission denied - invalid PVE ticket (401)"? Because I do, and I can't seem to find a setting for a timeout like this. Is this a bug?
Can you try clearing the browser cache and retry? Also, are there any third-party packages installed that might mess with the system?
 
Add a SPICE USB device to that VM via its Hardware tab while it's running. That is just a virtual port for use with the SPICE virt-viewer, so it avoids passing through an actual (possibly in-use) device or port. If it doesn't stay pending (orange font), it got hot-plugged.
It worked! I'm going to assume I can leave all this alone and call it working, then. :)

Thanks!
 
Can you try clearing the browser cache and retry? Also, are there any third-party packages installed that might mess with the system?
Hi
I've tried it in Firefox and also tried using "Incognito-mode" on Chrome. Same result, I'm kicked out after a few minutes.
And I have absolutely NO 3rd-party software on this system. I've installed some Debian packages like deborphan, members, and ethtool, but I can't imagine them interfering with this.
This is a cluster test-system that I have for exactly this kind of thing, and it's configured in precisely the same way as the production-system.
I'll be glad to do some digging around to solve this. And it might have something to do with my setup. Especially if nobody else is experiencing this.
And I have licenses for both systems.
 
Hi
I've tried it in Firefox and also tried using "Incognito-mode" on Chrome. Same result, I'm kicked out after a few minutes.
And I have absolutely NO 3rd-party software on this system. I've installed some Debian packages like deborphan, members, and ethtool, but I can't imagine them interfering with this.
This is a cluster test-system that I have for exactly this kind of thing, and it's configured in precisely the same way as the production-system.
I'll be glad to do some digging around to solve this. And it might have something to do with my setup. Especially if nobody else is experiencing this.
And I have licenses for both systems.
Please check:
- that all nodes in the cluster have their clocks synchronized with NTP and agree on the current time ;)
- the timestamp of /etc/pve/authkey*; it should be within the last 24h
-- if it is further in the past, your cluster is likely not quorate! Check the logs of corosync and pve-cluster and the output of pvecm status
-- if it is in the future, one of your nodes likely had its clock set wrong; a simple touch /etc/pve/authkey.* should reset it and fix the problem
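A compact way to run through that checklist on a node (a sketch of exactly the checks above):

# 1) clock synchronized via NTP?
timedatectl
# 2) timestamp of the ticket-signing key (should be within the last 24h)
ls -l /etc/pve/authkey*
# if it's too far in the past: is the cluster quorate? check corosync/pve-cluster
pvecm status
journalctl -u corosync -u pve-cluster --since "1 hour ago"
# if it's in the future: reset the timestamp (as suggested above)
touch /etc/pve/authkey.*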
 
