Proxmox VE 7.2 released!

martin

Proxmox Staff Member
We're excited to announce the release of Proxmox Virtual Environment 7.2. It's based on Debian 11.3 "Bullseye", but uses the newer Linux kernel 5.15.30, and ships QEMU 6.2, LXC 4, Ceph 16.2.7, and OpenZFS 2.1.4, along with countless enhancements and bugfixes.

Here is a selection of the highlights:
  • Support for the accelerated virtio-gl (VirGL) display driver
  • Notes templates for backup jobs (e.g. add the name of your VMs and CTs to the backup notes; see the example after this list)
  • Ceph erasure code support
  • Updated existing and new LXC container templates (New: Ubuntu 22.04, Devuan 4.0, Alpine 3.15)
  • ISO: Updated memtest86+ to the completely rewritten 6.0b version, adding support for UEFI and modern memory like DDR5
  • and many more GUI enhancements
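As a small illustration of the new notes templates: a backup job's notes field can combine several variables, so a template like the one below stamps each backup with its node, guest name, and VMID (see the documentation for the full list of supported variables):
Code:
{{node}} - {{guestname}} ({{vmid}})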
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.2

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-2-available

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-2

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 7.0 or 7.1 to 7.2 via GUI?
A: Yes.

Q: Can I upgrade Proxmox VE 6.4 to 7.2 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
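In rough outline (the wiki article above is the authoritative reference and also covers the necessary pre-checks; repository file names may differ depending on which repositories you have configured), the upgrade on each node looks something like this:
Code:
pve6to7 --full       # run the built-in checklist script first
sed -i 's/buster/bullseye/g' /etc/apt/sources.list
sed -i 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-enterprise.list
apt update
apt dist-upgrade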

Q: Can I install Proxmox VE 7.2 on top of Debian 11.x "Bullseye"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
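Very briefly sketched (the wiki article is authoritative; the repository and key paths below are taken from it, so double-check them against the current version):
Code:
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi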

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.2 with Ceph Octopus/Pacific?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 6.4 to 7.2, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific
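Condensed to its core Ceph steps (the Ceph wiki article above is the one to actually follow; the order of operations and per-node checks matter):
Code:
ceph osd set noout                      # prevent rebalancing during the upgrade
# on each node: update the Ceph packages, then restart mons, mgrs and OSDs in that order
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
ceph versions                           # verify every daemon reports Pacific
ceph osd require-osd-release pacific
ceph osd unset noout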

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Great to see the addition of features often requested here in the forums, like the enhanced restore dialog, configurable VMID ranges, moving disks between guests in the GUI, the more powerful backup scheduler, and so on.
 
Thanks a lot!! Looking forward to it.
Sorry for the noob question: is a simple GUI-based update enough, or do we have to do "apt dist-upgrade" or such from 7.1 to 7.2?
 
Sorry for the noob question: is a simple GUI-based update enough, or do we have to do "apt dist-upgrade" or such from 7.1 to 7.2?
The GUI will always do an apt dist-upgrade, as that's the required way to upgrade Proxmox VE in any case (major or minor).

For the record: note that it's normally safer to use SSH for bigger upgrades, as the pveproxy API daemon, which provides the web shell used for the upgrade, gets restarted too. While that is handled gracefully for minor updates, a major upgrade (6.x to 7.x) can cause trouble.
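Concretely, over SSH that boils down to the same two commands the GUI runs, ideally inside a screen or tmux session so a dropped connection cannot interrupt the upgrade:
Code:
apt update
apt dist-upgrade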
 
I have had a couple of VMs misbehaving after upgrading to PVE 7.2. The ones affected were all live-migrated from a 7.1 node (node A) to an upgraded and rebooted 7.2 node (node B), and later back to the by-then upgraded and rebooted 7.2 node (node A again).

The symptoms were high CPU usage on at least one core, and non-responsive or only partly functional networking. The machines did not respond to shutdown, so I was forced to reset them. But they seem to work fine after the reset and restart.

Mostly Debian and Ubuntu VMs, but also a pfSense instance.

Just a heads up.
 
Thanks, nice release with a lot of stunning new features!
Where can I access the erasure coding pool setup? It's not there:

(screenshot of the Ceph pool creation dialog)
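(For reference, my current assumption is that erasure-coded pools in 7.2 are created via the pveceph CLI or API rather than through this dialog; an unverified example, with the pool name and k/m values as placeholders:)
Code:
pveceph pool create ecpool --erasure-coding k=2,m=1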
 
Is there a way to change the default "note" of manual backups to something other than "{{guestname}}"?
 
No, there's no such way. Do you create many manual backups? With a backup job, one would need to write the notes template only once.
I already changed the backup jobs to "{{node}} - {{guestname}} ({{vmid}}): ". It would be great to be able to change the default for manual backups too, so it's consistent and I don't have to type it in every time. I always do a manual backup, set it to protected, and write a note about what I am planning to do before making a bigger change to a guest. That way, in case I do something wrong, I can restore a 'stop mode' backup from right before my change instead of restoring a 'snapshot mode' backup that might be up to 24 hours old. So I do manual backups quite often.
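Until something like that exists, one workaround I'm considering (assuming the vzdump CLI exposes the same notes-template variables plus a protected flag; I haven't verified the exact option names) would be to script the manual backup, e.g.:
Code:
# hypothetical example - VMID and storage name are placeholders
vzdump 100 --storage my-pbs --mode stop --notes-template 'before config change: {{guestname}} ({{vmid}})' --protected 1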
 
Maybe we could do this frontend-side only, i.e., make the notes field of the manual backup dialog stateful, saving the last used value in the browser's local storage for reuse. That would allow avoiding yet another config option.
 
pve-kernel-5.15.30-2-pve completely breaks GUI functionality. The cluster works as expected and can be managed via the CLI on the different nodes (stop, start, and migrate - also VMs running in HA mode), but there is no way to do it from the GUI. After booting the nodes with pve-kernel-5.13.19-6-pve, the GUI returns to expected functionality again. It seems like the GUI fails to communicate via the network, so my question is: what has changed with the network in pve-kernel-5.15.30-2-pve?
 
Super excited to upgrade to 7.2! All of the new features look fantastic!

Question about VirGL:
Does this mean that VM graphics processing can be sent to the host and processed as OpenGL commands? If so, does this mean that the host's GPU(s) can process guest graphics??

Thanks a ton for your continued efforts!


Tmanok
 
pve-kernel-5.15.30-2-pve completely breaks GUI functionality. The cluster works as expected and can be managed via the CLI on the different nodes (stop, start, and migrate - also VMs running in HA mode), but there is no way to do it from the GUI. After booting the nodes with pve-kernel-5.13.19-6-pve, the GUI returns to expected functionality again. It seems like the GUI fails to communicate via the network, so my question is: what has changed with the network in pve-kernel-5.15.30-2-pve?
You likely ran into the known QNAP NFS issue, as you already noticed yourself; at least I'd think so, as I also replied in the other thread.
This is not a network issue. Rather, the QNAP does not advertise its features 100% correctly, which causes the newer kernel to enable something the NFS server cannot actually provide, resulting in uninterruptible sleeps (which are literally uninterruptible, not even by any kill signal). That blocks pvestatd, which in turn makes the GUI mark everything as unknown. Not ideal, we know, but uninterruptible things are notoriously hard to deal with sanely without further blocking the system.
 
Does this mean that VM graphics processing can be sent to the host and processed as OpenGL commands? If so, does this mean that the host's GPU(s) can process guest graphics??
The VM opens a rendering device node of the host's OpenGL stack, and the VirGL driver can then use that to redirect any OpenGL commands that the guest OS, or programs running in it, submit.

IOW, and quite simplified, this lets guest programs access the host GPU somewhat like programs running on the host would, just in a slightly more restricted way. The advantages are that no special HW or software is required, just plain OpenGL drivers, and that the GPU can be shared between multiple VMs (albeit performance may degrade). The drawbacks compared to full passthrough are that it's restricted to OpenGL (so, e.g., no CUDA or the like) and that performance may be lower, as there's some overhead.
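As a rough illustration (assuming the display type is called virtio-gl in the VM config and that the host has working OpenGL/EGL libraries installed), enabling it for a VM would look roughly like this, with VMID 100 just a placeholder:
Code:
qm set 100 --vga virtio-gl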
 
Can you share a config of such an affected VM? Also, what's the underlying storage the VM disks are on?


Debian 9 VM
One CPU core spiked sometime during the upgrade and didn't go down for almost 12 hours, until the reset. The average CPU graph for the last month looks like a hockey stick. Storage is Ceph RBD.

Not much to go on, but here's the VM config, slightly redacted.

Code:
agent: 1
balloon: 1024
bootdisk: scsi0
cores: 2
cpu: kvm64,flags=+aes
description:
ide2: none,media=cdrom
memory: 3072
name:
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbrX,firewall=1
numa: 0
onboot: 1
ostype: l26
protection: 1
scsi0: ceph_ssd:vm-145-disk-0,size=10G,ssd=1
scsihw: virtio-scsi-pci
sockets: 1
vga: qxl

Disk config is the default, so io_uring, no cache, and so on.

EDIT: Backup to a PBS instance during the CPU spike succeeded, but it could not interact with the QEMU guest agent.
Code:
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
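For what it's worth, a quick way to check from the host whether the agent inside the guest is reachable again after the reset (using VMID 145 from the config above):
Code:
qm agent 145 ping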
 
Maybe we could do this frontend-side only, i.e., make the notes field of the manual backup dialog stateful, saving the last used value in the browser's local storage for reuse. That would allow avoiding yet another config option.
Not sure if that would make it better. I like that I can now directly add my notes for the backup without needing to edit the notes afterwards. In that case, it would mean I have to delete a lot of text from the last manual backup.
What about a "save note as default" button or checkbox next to the notes field, so it only saves the text as the default note if you really want it, and not every time? It could also be hidden behind a "show advanced options" toggle like in other places in the GUI.

A "protected" checkbox there would be great too so you don't need to set it as protected afterwards each time. Especially if that backup takes some time, you do somethung else meanwhile and then maybe forget it later and the backups gets purged by accident. I also tried to set it as protected while the backup is running but that won't work because the backup is locked until its finished.
 
