Proxmox VE 7.1 released!

martin

Proxmox Staff Member
We're excited to announce the release of Proxmox Virtual Environment 7.1. It's based on Debian 11.1 "Bullseye", but uses the newer Linux kernel 5.13, and ships QEMU 6.1, LXC 4.0, Ceph 16.2.6, and OpenZFS 2.1, along with countless enhancements and bugfixes.

Proxmox Virtual Environment 7.1 brings several new functionalities and many improvements for management tasks in the web interface: support for Windows 11 including TPM, an enhanced creation wizard for VMs/containers, the ability to set backup retention policies per backup job in the GUI, and a new scheduler daemon supporting more flexible schedules.

Here is a selection of the highlights:
  • Debian 11.1 "Bullseye", but using a newer Linux kernel 5.13
  • LXC 4.0, Ceph 16.2.6, QEMU 6.1, and OpenZFS 2.1
  • VM wizard with defaults for Windows 11 (q35, OVMF, TPM) - see the sketch right after this list
  • New backup scheduler daemon for flexible scheduling options
  • Backup retention settings configurable per backup job
  • Protection flag for backups
  • Two-factor Authentication: WebAuthn, recovery keys, multiple factors for a single account
  • New container templates: Fedora, Ubuntu, Alma Linux, Rocky Linux
  • and many more enhancements, bugfixes, etc.
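As a rough illustration of those Windows 11 defaults, the wizard's choices correspond approximately to the following CLI settings. This is a sketch only: the VM ID 100 and the storage name local-lvm are placeholders, and the exact option set may differ from what the wizard generates.
Code:
# approximate CLI equivalent of the Windows 11 wizard defaults (placeholder VM ID/storage)
qm create 100 --name win11 --machine q35 --bios ovmf --cores 4 --memory 8192
# EFI vars disk and a TPM 2.0 state volume
qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1
qm set 100 --tpmstate0 local-lvm:1,version=v2.0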
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.1

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-1-released

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-1

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 7.0 to 7.1 via GUI?
A: Yes.
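On the command line, the same minor upgrade boils down to a regular package update (assuming one of the Proxmox VE package repositories is configured):
Code:
apt update
apt dist-upgrade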

Q: Can I upgrade Proxmox VE 6.4 to 7.1 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I install Proxmox VE 7.1 on top of Debian 11.1 "Bullseye"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.1 with Ceph Octopus/Pacific?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 6.4 to 7.1, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Hi,

Great... but one question regarding the scheduler: Why don't you just allow the entry of crontab-style values directly? At least, when I tried, it didn't work.

Thanks
Tobias
 
Great... but one question regarding the scheduler: Why don't you just allow the entry of crontab-style values directly? At least, when I tried, it didn't work.
CRON is not used anymore; the more flexible scheduling format is based on systemd's calendar events, which we use for all newer scheduling operations - in Proxmox VE replication for a few years already, and in Proxmox Backup Server for GC, verify, or sync jobs: https://pbs.proxmox.com/docs/calendarevents.html
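A few illustrative schedule values in that calendar event format (not taken from the release notes, just examples of the syntax described in the linked documentation):
Code:
02:30            # every day at 02:30
mon..fri 21:00   # Monday through Friday at 21:00
sat 03:00        # every Saturday at 03:00
*/30             # every 30 minutes
daily            # shortcut for 00:00 every day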
 
Additional disks can now be added from the creation wizard, eliminating the need to add them after creating the VM or Container.
Allow re-assigning a disk to another VM.
Multiple CephFS instances are supported.
Yes, very good. Thanks - can't wait to upgrade and see the changes :)
 
Hi, we are using DRBD with Linstor. After upgrading our test cluster, we get: unsupported type 'drbd'... Now we can't access the storage from the VMs.

Is anyone else using Linstor, and can you confirm this?
 
Hi, we are using DRBD with Linstor. After upgrading our test cluster, we get: unsupported type 'drbd'... Now we can't access the storage from the VMs.
The DRBD/Linstor plugin is maintained by LinBit: https://github.com/LINBIT/linstor-proxmox
It seems they have not yet updated it to cope with the bumped storage API version (from 9 to 10), but they support API version 9, and that is forward compatible with API version 10.

So ensure you have the most recent version of that storage plugin, and if you still have issues, I'd recommend contacting LinBit.
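To quickly check which plugin version is installed (assuming the package is called linstor-proxmox, as in LinBit's repository):
Code:
# show the installed version of the LinBit storage plugin package
dpkg -s linstor-proxmox | grep -i version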
 
The release notes seem to be missing that you can now also have "notes" at the datacenter level? Or is this dev-repo only? Thanks for that feature.
 
May I know more about the "asynchronous IO mode for each disk of a virtual machine" feature?

I can understand the advantages in terms of performance but are there any risks when it is enabled?

Thanks
 
May I know more about the "asynchronous IO mode for each disk of a virtual machine" feature?
This option was always there, but only exposed in the CLI/backend.

I can understand the advantages in terms of performance but are there any risks when it is enabled?
For some setups and storage technologies, io_uring, the new default since Proxmox VE 7, caused problems. We actually found and fixed a kernel bug, so such issues should be much rarer now, but io_uring is still a relatively new technology (in other words, "only" a few years old, not decades), so users can now switch more easily back to the old default AIO mode to see if that helps when they observe similar symptoms.
In general, we're naturally determined to iron out any such issues when we can reproduce them, making any need to switch unnecessary in the first place. There are issues with AIO and threads too; AIO and io_uring are in the same ballpark here: in most setups they'll run just fine, and io_uring is definitely the more performant (or at least actually asynchronous) model.

Anyhow, it can still be interesting for benchmarking or evaluation to see the performance impact of changing the IO submission mechanism.
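For example, switching a single disk of a VM to a different AIO mode could look like this (VM ID and volume name are placeholders; keep the rest of your existing drive string, which `qm config` shows):
Code:
# inspect the current drive definition
qm config 100 | grep scsi0
# re-set the drive with an explicit AIO mode (native, threads, or io_uring)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native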
 
Will this still work now?
Yes. The actual execution of the backup jobs stayed the same: a new vzdump process is called with the respective arguments - nothing changed here. Only the way the jobs configured from the web interface/API are scheduled has changed.

How can I set up a new scheduled job?
Simply add a new job via the web interface (Datacenter -> Backup).
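To illustrate, a job configured there is eventually executed as a plain vzdump call along these lines (storage name, VM ID, and mail address are placeholders), and the configured jobs can be listed via the API:
Code:
# roughly what the scheduler runs for a configured job (illustrative arguments)
vzdump 100 --storage backup-store --mode snapshot --compress zstd --mailto admin@example.com
# list backup jobs configured via the web interface/API
pvesh get /cluster/backup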
 
Yes. The actual execution of the backup jobs stayed the same: a new vzdump process is called with the respective arguments - nothing changed here. Only the way the jobs configured from the web interface/API are scheduled has changed.


Simply add a new job via the web interface (Datacenter -> Backup).
Thank you for your help!
 
Amazing! Homelab will be busy this evening, testing out the latest and greatest stuff again. :D
If I understand correctly, "reassigning disk" means one can also attach a single virtual disk to multiple VMs concurrently?
 
If I understand correctly, "reassigning disk" means one can also attach a single virtual disk to multiple VMs concurrently?
No, it's more about moving a disk/volume to another VM/CT (on the same node) nicely with sanity checks and without editing configuration files or the like.
Using the same disk from multiple VMs is most of the time rather dangerous and unwanted, as it needs a file system that can handle the concurrent access (e.g., GFS2) - but it was always possible to do manually (config edit or CLI).
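On the CLI, that reassignment should map to the move_disk command with a target VM ID. This is a sketch only, with placeholder VM IDs and disk slot; check `qm help move_disk` on your installation, as the exact option name may differ:
Code:
# sketch: reassign disk scsi1 from VM 100 to VM 101 (verify the option with qm help move_disk)
qm move_disk 100 scsi1 --target-vmid 101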
 
Good evening.

A very nice update, already in test in the lab!

I do have one question: I've tried adding a Yubikey to my root account as another 2FA system, but I'm getting an error:

no yubico otp configuration available for realm pam (500)

I've also tried it on a pve realm account, unsuccessfully. Is this intended behaviour?

Best Regards
 
I do have one question: I've tried adding a Yubikey to my root account as another 2FA system, but I'm getting an error:
Do you really want to add a YubiCloud-linked key? Otherwise, you should use WebAuthn; most (all?) Yubikeys support it, and it's simpler, since no third-party service is required. You'd only need a trusted TLS certificate (e.g., through ACME).

I'm asking because the Yubico OTP cloud integration is used more rarely, and its initial config needs to be done in a more manual way.
 
Do you really want to add a YubiCloud-linked key? Otherwise, you should use WebAuthn; most (all?) Yubikeys support it, and it's simpler, since no third-party service is required. You'd only need a trusted TLS certificate (e.g., through ACME).

I'm asking because the Yubico OTP cloud integration is used more rarely, and its initial config needs to be done in a more manual way.

I'll look into WebAuthn then. I only have my self-signed TLS certificate though (I'm my own CA for the lab), so I don't know about that.
 
I'll look into WebAuthn then. I only have my self-signed TLS certificate though (I'm my own CA for the lab), so I don't know about that.
If you access PVE only from a limited set of clients, you could add the CA to your clients' certificate trust store.

With the ACME DNS challenge, you can also get trusted certificates for domains that resolve to a local address, where the host isn't accessible from the internet.
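For instance, on a Debian/Ubuntu client the lab CA could be added to the system trust store like this (the file name is a placeholder; note that some browsers, e.g. Firefox, keep their own certificate store):
Code:
# install the lab CA into the system-wide trust store
cp my-lab-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates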
 
It seems to work, but I'm getting hit with a "The user verified even through discouragement" error. It might be Windows Hello acting up.

I'll try to figure it out. Thanks for your help!
 
