Proxmox VE 8.3 released!

Making great progress on the OVA importer. Here are a few blockers I'm still seeing:
1. Synthetic disks are ignored. These are disks defined in the OVF with no file associated with them. Import should create a blank disk of the defined size.
2. For VMs with more than 2 IDE disks, only 2 IDE disks are imported. VMware addresses IDE disks as 0:0, 0:1, 1:0, 1:1 (controller:unit), while PVE uses flat indices ide0, ide1, ide2, ide3. The importer isn't mapping them correctly and discards the additional disks (see the mapping sketch below). In 8.2 it simply failed, so a little progress was made. If mapping IDE ports is too hard, leave the other disks in a detached state for now.
3. The SCSI controller always seems to come in as LSI 53C895A, regardless of the controller called out in the OVF.

Other than that, it's the hard things: missing OVF Environment variables preventing some complex OVAs from coming up, OVFs with multiple configuration options, or guests not recognizing the PVE variants of the LSI SAS or PVSCSI controllers.
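For reference, a minimal sketch of the mapping I'd expect, assuming the flat PVE indices simply count controller-major (my assumption, not the importer's actual logic):
Code:
# VMware addresses IDE disks as <controller>:<unit>; the flat PVE
# index would then be controller * 2 + unit:
# 0:0 -> ide0, 0:1 -> ide1, 1:0 -> ide2, 1:1 -> ide3
controller=1; unit=0
echo "ide$((controller * 2 + unit))"   # prints: ide2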
 
We are excited to announce that our latest software version 8.3 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.8 "Bookworm" but uses the newer Linux kernel 6.8.12-4 (with kernel 6.11 as opt-in), QEMU 9.0.2, LXC 6.0.0, and ZFS 2.2.6 (with compatibility patches for kernel 6.11).

Proxmox VE 8.3 comes full of new features and highlights:
  • Support for Ceph Reef and Ceph Squid
  • Tighter integration of the SDN stack with the firewall
  • New webhook notification target
  • New view type "Tag View" for the resource tree
  • New change detection modes for speeding up container backups to Proxmox Backup Server
  • More streamlined guest import from files in OVF and OVA
  • and much more
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap

Press release
https://www.proxmox.com/en/news/press-releases

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-8-3

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

There has been a lot of feedback from our community members and customers, and many of you reported bugs, submitted patches and were involved in testing - THANK YOU for your support!

With this release we want to pay tribute to a special member of the community who unfortunately passed away too soon.
RIP tteck! tteck was a genuine community member and he helped a lot of users with his Proxmox VE Helper-Scripts. He will be missed. We want to express sincere condolences to his wife and family.

FAQ
Q: Can I upgrade the latest Proxmox VE 7 to 8 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
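A condensed sketch of the usual flow (the wiki covers the repository details, e.g. enterprise vs. no-subscription lists, that this omits – follow it rather than this):
Code:
pve7to8 --full                     # run the built-in pre-upgrade checklist
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list   # repoint Debian repos
apt update && apt dist-upgrade     # perform the major upgrade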

Q: Can I upgrade an 8.0 installation to the stable 8.3 via apt?
A: Yes, upgrading is possible via apt or the GUI.
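On the shell, that boils down to the standard Debian commands (or use Node -> Updates -> Refresh and Upgrade in the GUI):
Code:
apt update
apt full-upgrade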

Q: Can I install Proxmox VE 8.3 on top of Debian 12 "Bookworm"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

Q: Can I upgrade from Ceph Reef to Ceph Squid?
A: Yes, see https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid

Q: Can I upgrade my Proxmox VE 7.4 cluster with Ceph Pacific to Proxmox VE 8.3 and to Ceph Reef?
A: This is a three-step process. First, you have to upgrade Ceph from Pacific to Quincy; afterwards you can upgrade Proxmox VE from 7.4 to 8.3. As soon as you are running Proxmox VE 8.3, you can upgrade Ceph to Reef. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.
For the Proxmox VE free edition and Proxmox Backup Server, how many CPUs and how much RAM are supported? Thanks for the help.
 
Thank you for the new release! However, we’ve encountered an issue with iSCSI login after a reboot affecting more than 20 hosts. Since the update to version 8.3.0, only one volume is connected during automatic login, while a manual login connects all volumes. The parameter node.session.initial_login_retry_max was initially set to 8, and increasing it to 20 had no effect. After raising it to 128, it seems to work correctly. Could you confirm if this behavior is expected or if there’s an alternative solution for environments of this size?
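For reference, we applied the change roughly like this (treat it as a sketch; add -T <target> -p <portal> to scope it to a single target):
Code:
# raise the retry limit on the discovered node records; for future
# discoveries, also set it in /etc/iscsi/iscsid.conf
iscsiadm -m node -o update \
  -n node.session.initial_login_retry_max -v 128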
 
Hi, is it normal that this step:
Code:
Setting up pve-manager (8.3.0) ...
takes very long? It takes longer than the other updates I have done before.
 
OVA import is great; some feedback though: I can't seem to change the disk type during import, I have to detach/reattach afterwards.
 
For the Proxmox VE free edition and Proxmox Backup Server, how many CPUs and how much RAM are supported? Thanks for the help.
All our software, with all features, is AGPL-licensed open source software – there are no limits on CPUs or RAM (it does not matter whether you have a subscription or not).

I hope this helps!
 
Hello, thanks for this! How can we propose some additions to the roadmap? I'd like to suggest adding VDO support ;)
 
Hello, thanks for this! How can we propose some additions to the roadmap? I'd like to suggest adding VDO support ;)
If you think it's relevant for Proxmox VE, you can open an enhancement request over at https://bugzilla.proxmox.com/
Please add a bit of context (links to existing sites are fine) so that someone who is not familiar with all the three-letter acronyms our industry has can more easily see what it is for.
 
Everything seems fine. Nothing skipped a beat. Do I have to restart?
If a new kernel got pulled in with the upgrade then you might want to reboot the host sooner or later, but otherwise no, there's no need to reboot on updates of our user-space management stack.

Oh, and after a QEMU update one can use either live migration to an already-updated host (if the nodes are in a cluster), or hibernate+resume, or a full restart of the VM to get it running with the newer QEMU version. A reboot of the host, e.g. for a kernel update as mentioned before, will naturally also do the trick.
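From the CLI, those options would look roughly like this (VM ID and node name are placeholders; double-check the options against the qm man page):
Code:
qm migrate 100 other-node --online          # live-migrate within a cluster
qm suspend 100 --todisk 1 && qm start 100   # hibernate, then resume
qm shutdown 100 && qm start 100             # plain full restart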
It didn't say so. I think it restarted the web UI because my console came disconnected right at the tail end of it, but I reconnected and checked for updates again and it says I'm all up to date.
Normally it even stays connected, but to be honest, off the top of my head I'm not 100% sure there is no legitimate case (like specific packages getting upgraded) where a console disconnect might happen.

So that's it? It's a simple non-event?
Yes, normally it should be.
 
Making great progress on the OVA importer. Here are a few blockers I'm still seeing:
1. Synthetic disks are ignored. These are disks defined in the OVF with no file associated with them. Import should create a blank disk of the defined size.
2. For VMs with more than 2 IDE disks, only 2 IDE disks are imported. VMware addresses IDE disks as 0:0, 0:1, 1:0, 1:1 (controller:unit), while PVE uses flat indices ide0, ide1, ide2, ide3. The importer isn't mapping them correctly and discards the additional disks. In 8.2 it simply failed, so a little progress was made. If mapping IDE ports is too hard, leave the other disks in a detached state for now.
3. The SCSI controller always seems to come in as LSI 53C895A, regardless of the controller called out in the OVF.
Thanks for your feedback! Do you mind opening an enhancement report over at https://bugzilla.proxmox.com/ with these points (one report for all three would be fine) and ideally also referencing some sample OVAs/OVFs that show these problems? That would make fixing them easier for us.
 
Thank you for the new release! However, we’ve encountered an issue with iSCSI login after a reboot affecting more than 20 hosts. Since the update to version 8.3.0, only one volume is connected during automatic login, while a manual login connects all volumes. The parameter node.session.initial_login_retry_max was initially set to 8, and increasing it to 20 had no effect. After raising it to 128, it seems to work correctly. Could you confirm if this behavior is expected or if there’s an alternative solution for environments of this size?
This sounds like it could be an unintended effect of [1], which adjusts node.session.initial_login_retry_max to decrease an overly high default login timeout. Can you please open a new thread and provide:
  • The content of /etc/pve/storage.cfg
  • The output of the following commands:
    Code:
    iscsiadm -m node
    iscsiadm -m session
    ls -al /dev/disk/by-path/*iscsi*
  • The output of:
    Code:
    journalctl -b -u open-iscsi -u iscsid -u pvestatd
  • How did you adjust node.session.initial_login_retry_max?
  • Before upgrading to PVE 8.3, did you have any custom config set in /etc/iscsi/iscsid.conf?
  • Can you clarify what you mean by "automatic login"? Do you mean the login to iSCSI storages performed by PVE on boot?
    Similarly, what do you mean by "manual login"? Do you mean manually running iscsiadm -m node --login?
Please mention me (@fweber) in the new thread.

[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=e16c816f97a865a709bbd2d8c01e7b1551b0cc97
 
Hi! New Proxmox user here (and LOVING IT). In my home environment I just have one Proxmox server that has a few VMs and containers on it. I ran the update and it looks like it took me from 8.2 to 8.3.

Everything seems fine. Nothing skipped a beat. Do I have to restart? It didn't say so. I think it restarted the web UI because my console came disconnected right at the tail end of it, but I reconnected and checked for updates again and it says I'm all up to date. So that's it? It's a simple non-event?

This is a user who's used VMware professionally for 14+ years and is used to a maintenance-mode-then-reboot workflow.
You need to reboot for the new kernel to be used, but yes, it is that easy ;)
 
  • New change detection modes for speeding up container backups to Proxmox Backup Server

Is there detailed information on this improvement? What gain can we expect?
In the case of slow storage (NFS), will this improve speed as well?
 
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_ct_change_detection_mode

In our tests, backup time was a fraction of what the old mode needed, e.g. ~10 min -> ~3 min on one of my personal hosts. In other situations the gain might be even bigger.
This is awesome.

If we're already running a PBS+PVE setup, do we need to make adjustments to get the performance boost?

I've read the docs and the release notes, and I'm a bit confused.
  • It looks like Default is the way it worked before?
  • Data uses the new pxar v. 2 format, and separates out data and metadata into separate streams, which makes sense; but then
  • Metadata does the same thing as Data but uses previous CT snapshots (if they exist) to track changes.
Is that right?

If so, question: Why choose Data over Metadata? It seems like Metadata would always be the most compute/space efficient option. What am I missing?

Separate question:
How does changing this option away from default impact existing backups of CTs?
 
I am experiencing an issue with the driver (ixgbe) for the Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 network card. On Proxmox version 8.3, the driver for this card is not detected, whereas it works perfectly on Proxmox 7.4.
 
If we're already running a PBS+PVE setup, do we need to make adjustments to get the performance boost?
Yes, you need to switch the change-detection mode of your backup jobs over to metadata (Edit Job -> Advanced -> PBS change detection mode).
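For a one-off backup from the CLI, the same switch can be passed to vzdump directly (CT ID and storage name are placeholders; verify the option name against your vzdump man page):
Code:
vzdump 101 --storage my-pbs --pbs-change-detection-mode metadata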
It looks like Default is the way it worked before?
Yes, it's the legacy method of creating the pxar archive and still the default. The new variant had been available for about six months, and we only dropped the experimental flag with this release. Note that most, IIRC even all, changes were done on the client side.
  • Data uses the new pxar v. 2 format, and separates out data and metadata into separate streams, which makes sense; but then
  • Metadata does the same thing as Data but uses previous CT snapshots (if they exist) to track changes.
Is that right?
These two modes always result in the same new format; the option is not a "PBS pxar format" selector (that would only need two options) but rather a change-detection mode.

The reasoning here is that the new, more efficient change-detection modes need fast comparison of metadata with the previous snapshot. The old on-disk format did not allow querying the metadata without also reading through basically all of the actual data; that's why the split was done.
If so, question: Why choose Data over Metadata? It seems like Metadata would always be the most compute/space efficient option. What am I missing?
In terms of backup result there should be no difference: all backups are full ones, and the data chunks that make up a backup are simply shared between the different backup snapshots. Space usage on the PBS side should not differ significantly. The performance win comes from being able to do less (read) IO on the backup source side to compute what data needs to be backed up. That is, the difference is in how the Proxmox Backup Client detects changes in files to determine which new data chunks it needs to send to the server for new or modified files.

Metadata checks whether metadata changed, i.e. file size, file type, inode, permissions and so on. But metadata is not a 100% reliable indicator: one can, for example, change a file's content while keeping the same size and permissions and then manually reset the modification timestamp. This is unlikely in practice, but in short, one can construct cases where the metadata stays the same while the data actually changed. The metadata change mode will not detect this; the data mode does, but naturally at the cost of having to read everything again, like the legacy mode.
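To make that concrete, here is a contrived shell example (GNU coreutils assumed) that changes a file's content while keeping size, permissions and modification time identical – exactly the case the metadata mode cannot catch:
Code:
old_mtime=$(stat -c %y /tmp/demo.txt)      # remember the current mtime
printf 'X' | dd of=/tmp/demo.txt bs=1 count=1 \
    conv=notrunc 2>/dev/null               # overwrite one byte in place
touch -d "$old_mtime" /tmp/demo.txt        # restore the old mtime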

The data change mode still exists because it creates the new format with the old detection, and one can use it to switch between the two modern change modes without additional impact. E.g., one could create a backup job with a higher frequency, say hourly, using the metadata mode to avoid lots of read IO and get a quicker backup, and then a second job with a lower frequency, e.g. daily, that reads everything to decide what needs to be backed up (a sketch follows below). I would not say this is required, though; for most common use cases the metadata change-detection mode is safe. Tools that reset metadata like change/modification times while actually changing the file's content don't really have a legitimate use case – but we still let the user decide what's best for them.
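Such a split could look roughly like the following two one-off invocations (IDs and storage name are made up; actual scheduling would be configured on the backup jobs themselves):
Code:
# hourly, cheap: trust metadata to detect changed files
vzdump 101 --storage my-pbs --pbs-change-detection-mode metadata
# daily, thorough: re-read all file data
vzdump 101 --storage my-pbs --pbs-change-detection-mode data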
Separate question: How does changing this option away from default impact existing backups of CTs?
Existing backups won't be touched at all; they stay valid and can always be restored. As said above, on the PBS side all backups are full ones; it never needs to apply a set of diffs to reconstruct the actual backup state like some other solutions do.
 
I am experiencing an issue with the driver (ixgbe) for the Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 network card. On Proxmox version 8.3, the driver for this card is not detected, whereas it works perfectly on Proxmox 7.4.
You mean the kernel module? FWIW, we have the X540-AT2 in some of our test hardware, and it works fine there. E.g., I have a long-term three-node test cluster where each node has such a NIC, configured as the Ceph full-mesh cluster network. Two nodes use the 6.8 kernel and one node uses the 6.11 kernel; the NIC works fine with both kernels.

Maybe open a new thread and describe the rest of the hardware and the problems you're seeing.
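For that thread, output from standard diagnostics along these lines would help (nothing Proxmox-specific assumed):
Code:
lspci -nnk | grep -iA3 ethernet   # which kernel driver (if any) is bound
lsmod | grep ixgbe                # is the module loaded at all
dmesg | grep -i ixgbe             # probe errors or firmware messages
ip -br link                       # do the interfaces show up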
 
