Proxmox VE 8.3 released!

Making great progress on the OVA importer. Here are a few blockers I'm still seeing:
1. Synthetic disks are ignored. This is a disk defined in the OVF that has no file associated with it. Import should create a blank disk of the defined size.
2. For VMs with more than 2 IDE disks, only 2 IDE disks are imported. VMware uses IDE0:0, 0:1, 1:0, 1:1 for IDE connectivity, while PVE uses ide0, ide1, ide2, ide3. The importer isn't mapping them correctly and discards the additional disks. In 8.2 the import simply failed, so some progress was made. If mapping IDE ports is too hard, leave the extra disks in a detached state for now (a rough sketch of the expected mapping follows this list).
3. The SCSI controller always seems to come in as LSI 53C895A, regardless of the controller called out in the OVF.
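To illustrate point 2, here is a minimal sketch of the mapping I would expect, assuming the flat slot index is simply controller * 2 + unit (my assumption, not the importer's actual logic):
Code:
# hypothetical mapping: VMware IDE<controller>:<unit> -> PVE ide(controller * 2 + unit)
for c in 0 1; do
  for u in 0 1; do
    echo "VMware IDE${c}:${u} -> PVE ide$((c * 2 + u))"
  done
done
# IDE0:0 -> ide0, IDE0:1 -> ide1, IDE1:0 -> ide2, IDE1:1 -> ide3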

Other than that, it's the hard things, like missing OVF environment variables preventing some complex OVAs from coming up, OVFs with multiple configuration options, or guests not recognizing the PVE variants of the LSI SAS or PVSCSI controllers.
 
We are excited to announce that our latest software version 8.3 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.8 "Bookworm", but uses the newer Linux kernel 6.8.12-4 (with kernel 6.11 available as an opt-in), QEMU 9.0.2, LXC 6.0.0, and ZFS 2.2.6 (with compatibility patches for kernel 6.11).

Proxmox VE 8.3 comes full of new features and highlights:
  • Support for Ceph Reef and Ceph Squid
  • Tighter integration of the SDN stack with the firewall
  • New webhook notification target
  • New view type "Tag View" for the resource tree
  • New change detection modes for speeding up container backups to Proxmox Backup Server
  • More streamlined guest import from files in OVF and OVA
  • and much more
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap

Press release
https://www.proxmox.com/en/news/press-releases

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-8-3

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

There has been a lot of feedback from our community members and customers, and many of you reported bugs, submitted patches and were involved in testing - THANK YOU for your support!

With this release we want to pay tribute to a special member of the community who unfortunately passed away too soon.
RIP tteck! tteck was a genuine community member and he helped a lot of users with his Proxmox VE Helper-Scripts. He will be missed. We want to express sincere condolences to his wife and family.

FAQ
Q: Can I upgrade the latest Proxmox VE 7 to 8 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
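Before starting the major upgrade, it is also worth running the pve7to8 checklist script that the upgrade guide mentions, e.g.:
Code:
pve7to8 --full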

Q: Can I upgrade an 8.0 installation to the stable 8.3 via apt?
A: Yes, upgrading is possible via apt and the GUI.
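For example, on the console of each node (assuming your repositories are configured correctly):
Code:
apt update
apt full-upgrade
pveversion   # should now report pve-manager/8.3.x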

Q: Can I install Proxmox VE 8.3 on top of Debian 12 "Bookworm"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

Q: Can I upgrade from Ceph Reef to Ceph Squid?
A: Yes, see https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid

Q: Can I upgrade my Proxmox VE 7.4 cluster with Ceph Pacific to Proxmox VE 8.3 and to Ceph Reef?
A: This is a three-step process. First, you have to upgrade Ceph from Pacific to Quincy; afterwards you can upgrade Proxmox VE from 7.4 to 8.3. As soon as you are running Proxmox VE 8.3, you can upgrade Ceph to Reef. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef
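At each step you can verify what is actually running across the cluster, for example:
Code:
ceph versions   # running version per daemon type (mon, mgr, osd, ...)
ceph -s         # cluster health; should be HEALTH_OK before each step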

Q: Where can I get more information about feature updates?
A: Check the roadmap, the forum, and the mailing list, and/or subscribe to our newsletter.
Proxmox VE free edition and Backup Server: how many CPUs and how much RAM are supported? Thanks for the help.
 
Thank you for the new release! However, we’ve encountered an issue with iSCSI login after a reboot affecting more than 20 hosts. Since the update to version 8.3.0, only one volume is connected during automatic login, while a manual login connects all volumes. The parameter node.session.initial_login_retry_max was initially set to 8, and increasing it to 20 had no effect. After raising it to 128, it seems to work correctly. Could you confirm if this behavior is expected or if there’s an alternative solution for environments of this size?
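For reference, we adjusted the parameter roughly like this (a sketch; the global default lives in /etc/iscsi/iscsid.conf and only applies to newly discovered targets, so existing node records were updated as well):
Code:
# /etc/iscsi/iscsid.conf (global default for newly discovered targets):
#   node.session.initial_login_retry_max = 128
# update already-discovered node records:
iscsiadm -m node -o update -n node.session.initial_login_retry_max -v 128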
 
Hi, is it normal that the step
Code:
Setting up pve-manager (8.3.0) ...
takes very long? It takes longer than the other updates I have done before.
 
OVA import is great; some feedback though. I can't seem to change the disk type during import, I have to detach/attach afterwards.
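In case it helps others, the workaround can also be done with qm; a rough sketch, assuming VMID 100, the imported disk on ide0, and storage local-lvm (the volume name will differ on your setup):
Code:
qm set 100 --delete ide0                     # detach; the volume reappears as unused0
qm set 100 --scsi0 local-lvm:vm-100-disk-0   # re-attach the same volume as a SCSI disk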
 
Proxmox VE free edition and Backup Server: how many CPUs and how much RAM are supported? Thanks for the help.
All our software, with all features, is AGPL-licensed open source software - there are no limits on CPUs or RAM (it does not matter whether you have a subscription or not).

I hope this helps!
 
Hello, thanks for this! How can we propose additions to the roadmap? I'd like to suggest adding VDO support ;)
 
Hello, thanks for this! How can we propose additions to the roadmap? I'd like to suggest adding VDO support ;)
If you think it's relevant for Proxmox VE, you can open an enhancement request over at https://bugzilla.proxmox.com/
Please add a bit of context (links to existing sites are fine) so that someone who is not familiar with all the three-letter acronyms our industry uses can more easily see what this is for.
 
Everything seems fine. Nothing skipped a beat. Do I have to restart?
If a new kernel got pulled in with the upgrade then you might want to reboot the host sooner or later, but otherwise no, there's no need to reboot on updates of our user-space management stack.
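A quick way to check whether a newer kernel is installed than the one currently running (a sketch; package names assume a current PVE 8 setup, older setups used pve-kernel-*):
Code:
uname -r                               # kernel currently running
dpkg -l 'proxmox-kernel-*' | grep ^ii  # installed kernels; anything newer than uname -r means a reboot is pending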

Oh, and after a QEMU update one can use either live migration to an already updated host (if the nodes are in a cluster), hibernate+resume, or a full restart of the VM to get it running with the newer QEMU version. A reboot of the host, e.g. for a kernel update as mentioned before, will naturally also do the trick.
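To check whether a VM is still running on an older QEMU binary, something like this should work on reasonably recent PVE versions (assuming VMID 100):
Code:
qm status 100 --verbose | grep running-qemu  # QEMU version the VM was started with
pveversion -v | grep pve-qemu                # QEMU package currently installed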
It didn't say so. I think it restarted the web UI, because my console got disconnected right at the tail end of it, but I reconnected and checked for updates again and it says I'm all up to date.
Normally it even stays connected, but to be honest, off the top of my head I'm not 100% sure that there is no legitimate case (like some specific packages getting upgraded) where a disconnect of the console might happen.

So that's it? It's a simple non-event?
Yes, normally it should be.
 
Making great progress on the OVA importer. Here are a few blockers I'm still seeing:
1. Synthetic disks are ignored. This is a disk defined in the OVF that has no file associated with it. Import should create a blank disk of the defined size.
2. For VMs with more than 2 IDE disks, only 2 IDE disks are imported. VMware uses IDE0:0, 0:1, 1:0, 1:1 for IDE connectivity, while PVE uses ide0, ide1, ide2, ide3. The importer isn't mapping them correctly and discards the additional disks. In 8.2 the import simply failed, so some progress was made. If mapping IDE ports is too hard, leave the extra disks in a detached state for now.
3. The SCSI controller always seems to come in as LSI 53C895A, regardless of the controller called out in the OVF.
Thanks for your feedback! Do you mind opening an enhancement report over at https://bugzilla.proxmox.com/ with these points (one report for all three is fine) and ideally also referencing some sample OVAs/OVFs that show these problems? That would make fixing them easier for us.
 
Thank you for the new release! However, we’ve encountered an issue with iSCSI login after a reboot affecting more than 20 hosts. Since the update to version 8.3.0, only one volume is connected during automatic login, while a manual login connects all volumes. The parameter node.session.initial_login_retry_max was initially set to 8, and increasing it to 20 had no effect. After raising it to 128, it seems to work correctly. Could you confirm if this behavior is expected or if there’s an alternative solution for environments of this size?
This sounds like it could be an unintended effect of [1], which adjusts node.session.initial_login_retry_max to decrease an overly high default login timeout. Can you please open a new thread and provide:
  • The content of /etc/pve/storage.cfg
  • The output of the following commands:
    Code:
    iscsiadm -m node
    iscsiadm -m session
    ls -al /dev/disk/by-path/*iscsi*
  • The output of:
    Code:
    journalctl -b -u open-iscsi -u iscsid -u pvestatd
  • How did you adjust node.session.initial_login_retry_max?
  • Before upgrading to PVE 8.3, did you have any custom config set in /etc/iscsi/iscsid.conf?
  • Can you clarify what you mean by "automatic login"? Do you mean the login to iSCSI storages performed by PVE on boot?
    Similarly, what do you mean by "manual login"? Do you mean manually running iscsiadm -m node --login?
Please mention me (@fweber) in the new thread.

[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=e16c816f97a865a709bbd2d8c01e7b1551b0cc97
 
