Proxmox VE 7.3 released!

I did Refresh and Upgrade on both of my nodes, but they are both still on 7.2-3. I also tried apt update && apt dist-upgrade from the shell - no upgrade.
Do you have an active Proxmox VE repository configured? Check the Repository tab; it should state that you get updates from PVE.
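A quick way to double-check from the shell (default paths on a stock installation; pveversion ships with PVE):
Code:
# list all configured APT sources that mention Proxmox
grep -ri proxmox /etc/apt/sources.list /etc/apt/sources.list.d/
# show the currently installed PVE version
pveversion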
 
I had a problem upgrading from 7.2 -> 7.3 on my home machine. I think a reboot will fix it, but I want to report it. Is replying in-thread the correct place to do this, or should I start a fresh thread?
 
Any plans to add BTRFS for storage replication?
Nothing is actively being worked on, but improving & extending the storage replication is generally something we'd like to do eventually, and BTRFS will definitely be something we'll evaluate there too.
 
I had a problem upgrading from 7.2 -> 7.3 on my home machine. I think a reboot will fix it, but I want to report it. Is replying in-thread the correct place to do this, or should I start a fresh thread?
It can be OK to handle small things directly here, but it's often better to open a new thread, so that any follow-up questions don't get lost in (or clutter) the general thread - thanks for your consideration!
 
To upgrade from Proxmox 7.2 to Proxmox 7.3, do:

Download the repository key, verify its hash, and add the repository:
Code:
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg

# compare the printed hash against the checksum published in the Proxmox docs
sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg

echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

apt update

To check the list of upgradable packages:
Code:
apt list --upgradable

To upgrade:
Code:
apt update && apt dist-upgrade
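Afterwards, a quick sanity check from the shell (the exact version string and kernel will differ per setup):
Code:
pveversion
# should now report something like:
# pve-manager/7.3-x/... (running kernel: 5.15.x-pve)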
 
To upgrade from Proxmox 7.2 to Proxmox 7.3, do: [...]
Not really; the repos should already be configured on 7.2, so just click Refresh/Upgrade in the GUI.

If the repository is not configured, you can also add it via the GUI.
 
See https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#ha_manager_crs

Basically, it's the foundation for that. For now it's limited to the actions where the HA stack already had to find a new node (recovering fenced services, the migrate shutdown-policy & HA group changes), and it uses the static load (configured CPUs and memory, with memory having much more weight). We're actively working on extending that, but we found the current version already a big improvement for HA, and releasing in smaller steps always makes sense.
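For those who want to try it: the new scheduler mode is a datacenter option; a minimal sketch, assuming all nodes are already on 7.3:
Code:
# enable the static-load CRS mode cluster-wide
pvesh set /cluster/options --crs ha=static
# equivalent entry in /etc/pve/datacenter.cfg:
# crs: ha=static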
Will this be used to introduce a "node maintenance mode"? We still really need a way to easily take a node out of production so we can work on it. At the moment we still use some homegrown scripts to migrate VMs around if we want to drain a node.
 
What's the interface for the new taskset CPU pinning? I'm not seeing anything new in the web interface right now.
Try CTRL+F5 in your browser to clear the cache, then go to the VM's Hardware tab and edit the CPU.
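For the CLI-inclined, the same setting can also be applied with qm; a minimal sketch, where VM ID 100 and host cores 0-3 are just example values:
Code:
# pin all of VM 100's vCPU threads to host cores 0-3 (taskset-based, PVE 7.3+)
qm set 100 --affinity 0-3
# verify the config entry
qm config 100 | grep affinity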
 
Congratulations on the release!

A couple of rookie questions:
"New VMs use qemu-xhci USB controller, if supported by the guest OS (Windows >= 8, Linux >= 2.6)"

  1. It sounds like existing VMs that use USB passthrough don't automatically get access to the qemu-xhci controller? How would we check which controller is being used inside the VM?
  2. Is it possible to adjust an existing VM to use qemu-xhci if it's not already? If so, how do we do that?
Thanks!
 
The upgrade from 7.2 to 7.3 went without problems (on ZFS here).

Memo:
Code:
# optional: shutdown VM/LXC
zfs snapshot rpool/ROOT/pve-1@Stable_20221122_PM72
byobu # or any other terminal multiplexer (tmux, screen, ...)
apt update
apt dist-upgrade
# Afterwards:
# - refresh browser cache with CTRL+F5 (!)
# - exit byobu (CTRL+F6) and reboot
# - check `dmesg -wHT`
# optional: `apt autoremove`

The snapshot 7.2->7.3 consumes about 342M on my machine.
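In case an upgrade goes wrong, the snapshot can be inspected and rolled back; a sketch using the snapshot name from the memo above (rolling back the running root dataset is best done from a rescue environment):
Code:
# show the space used by snapshots of the root dataset
zfs list -t snapshot -o name,used rpool/ROOT/pve-1
# roll back - this discards all changes made after the snapshot!
# zfs rollback rpool/ROOT/pve-1@Stable_20221122_PM72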
 
Is the need for CTRL+F5 a bug/known issue, or does it have a purpose? I find it quite strange.
No, it's just required to clear the browser cache and to ensure that you aren't connected through a worker of the old (pre-upgrade) API daemon anymore.
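For reference: the web UI is served by the pveproxy service, so instead of a full reboot you can also restart it (and the API daemon) to make sure no old workers linger - standard service names on a PVE node:
Code:
systemctl restart pveproxy.service pvedaemon.service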
 
Will this be used to introduce a "node maintenance mode"? We still really need a way to easily take a node out of production so we can work on it. At the moment we still use some homegrown scripts to migrate VMs around if we want to drain a node.
There already is one for HA: the migrate shutdown-policy. And yes, once set, it will be used for a better selection of target nodes.
For non-HA guests it's planned, but it needs much more extra work (especially if you consider more than the one narrow use case where everything is already shared, which could just use HA anyway). Once that mechanism is there, the new CRS stuff can be used for it too. But as said, CRS is "just" the selection, not the underlying mechanism that handles such things - that's done by the HA stack, which already has all the required information.
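For completeness, the mentioned migrate shutdown-policy is a datacenter option and can be set like this (affects HA-managed guests cluster-wide):
Code:
# migrate HA-managed guests away before a node shuts down
pvesh set /cluster/options --ha shutdown_policy=migrate
# equivalent entry in /etc/pve/datacenter.cfg:
# ha: shutdown_policy=migrate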
 
  1. It sounds like existing VMs that use USB passthrough don't automatically get access to the qemu-xhci controller? How would we check which controller is being used inside the VM?
  2. Is it possible to adjust an existing VM to use qemu-xhci if it's not already? If so, how do we do that?
As the questions are somewhat related, I'll try to clear that up with a short, slightly more general explanation of how this is handled:

It actually depends on two things: first, the OS type/version must match the quoted Windows >= 8 or Linux >= 2.6 (which should be a given, as older versions are EOL anyway); second, the VM machine version.

That is the version we use to ensure backward compatibility for things like migration or live snapshots (those with RAM included), as the virtual HW cannot change much there without breaking the running OS. As Windows is a much less elaborate and flexible OS than Linux, it often cannot cope well with even small changes, so we always pin Windows machines to the latest machine version available at VM creation. For Linux, OTOH, it's just a matter of doing a full restart so that the VM freshly starts up with the new QEMU.

You can manage and update any pinned machine version in the VM's Hardware tab on the web UI.
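To illustrate both points, a short sketch (VM ID 100 is just an example; the guest-side commands assume a Linux guest):
Code:
# inside the Linux guest: check which USB controller the VM exposes
lspci | grep -i usb   # shows e.g. an "xHCI Host Controller" entry
lsusb -t              # driver reads xhci_hcd when qemu-xhci is in use

# on the host: inspect and, if needed, update the pinned machine version
qm config 100 | grep machine
qm set 100 --machine q35   # example: use the latest q35 version (no pin)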

Hope that helps.
 
