Proxmox VE 6.3 available

martin

Proxmox Staff Member
We are really excited to announce the general availability of our virtualization management platform Proxmox VE 6.3. The most notable new feature is the integration of the stable version 1.0 of our new Proxmox Backup Server. Backups are strongly encrypted on the client side, and creating and managing encryption keys is very simple, with multiple ways to store the keys. VM backups are blazing fast thanks to QEMU dirty bitmaps.
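For illustration, a minimal sketch of the client-side encryption workflow with proxmox-backup-client (the key path, user, host, and datastore names below are placeholders):

Code:
# create a client-side encryption key (prompts for a passphrase)
proxmox-backup-client key create ~/.config/proxmox-backup/encryption-key.json --kdf scrypt

# run an encrypted backup of the root filesystem against a placeholder datastore
proxmox-backup-client backup root.pxar:/ \
  --repository backup-user@pbs@pbs.example.com:datastore1 \
  --keyfile ~/.config/proxmox-backup/encryption-key.json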

This release also brings stable integration of Ceph Octopus 15.2.6, and you can now select your preferred Ceph version during the installation process. Many new Ceph-specific management features have been added to the GUI, such as displaying the recovery progress in the Ceph status panel or setting the placement group (PG) auto-scaling mode for each Ceph pool in the storage cluster. In general, we have added even more functionality and usability enhancements to the GUI.
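The per-pool autoscaling mode can, for example, also be set from the CLI; a short sketch (the pool name is hypothetical):

Code:
# show the autoscaler status and recommendations for all pools
ceph osd pool autoscale-status

# enable PG auto-scaling for a hypothetical pool named "vm-storage"
ceph osd pool set vm-storage pg_autoscale_mode on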

Proxmox VE 6.3 is built on Debian Buster 10.6 but uses the latest long-term support Linux kernel (5.4), QEMU 5.1, LXC 4.0, Ceph 15.2, and ZFS 0.8.5.
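After installing or upgrading, the exact component versions running on a node can be verified, for example, with:

Code:
pveversion -v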

Countless bug fixes and smaller improvements are included as well; see the full release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso

Documentation

https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I dist-upgrade Proxmox VE 6.x to 6.3 with apt?
A: Yes, either via the GUI or via the CLI with apt update && apt dist-upgrade
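A minimal sketch of the CLI route (assuming the standard Proxmox VE repositories are already configured):

Code:
apt update
apt dist-upgrade
# afterwards, confirm the running version
pveversion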

Q: Can I install Proxmox VE 6.3 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
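In condensed form (the linked wiki article remains the authoritative reference; the key file name is the one used for the 6.x releases):

Code:
# add the Proxmox VE no-subscription repository
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list

# add the repository signing key
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi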

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus and even Ceph Octopus?
A: This is a three-step process. First, upgrade Proxmox VE from 5.4 to 6.3, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly (a quick pre-flight check is sketched below the links).
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus

Finally, do the upgrade to Ceph Octopus - https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus
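As a pre-flight sketch for the first step, the built-in checker script can be run on each node, and the Ceph release can be verified after each Ceph upgrade:

Code:
# check for known blockers before moving from 5.4 to 6.x (shipped with Proxmox VE 5.4)
pve5to6

# after each Ceph upgrade step, confirm all daemons run the expected release
ceph versions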

Q: Where can I get more information about future feature updates?
A: Check our roadmap, forum, mailing lists, and subscribe to our newsletter.

A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
I know in previous major upgrades it's knocked out my e1000e network adapter or custom network configurations. Should I expect that to happen again with this upgrade?
 
Just upgraded a 3-node Ceph cluster and all went as expected (i.e. great!). All configs stayed in place and the network setup is intact! :cool:
 
Just upgraded a 4-node cluster with shared SAS storage where volumes are offered to the nodes as GFS2 mountpoints - flawless. I hadn't been able to upgrade the kernel past a certain point because it would hit a kernel panic (see my other post); with the release of 6.3.x all is solved again.
So I'm a happy bunny :)
 
Just upgraded a 6-node cluster with shared iSCSI storage; all seems to work fine, just a warning during the upgrade on all nodes:

Code:
Installing new version of config file /etc/frr/daemons ...
Installing new version of config file /etc/frr/support_bundle_commands.conf ...
Installing new version of config file /etc/logrotate.d/frr ...
addgroup: The group `frrvty' already exists as a system group. Exiting.
addgroup: The group `frr' already exists as a system group. Exiting.
Warning: The home dir /nonexistent you specified can't be accessed: No such file or directory
The system user `frr' already exists. Exiting.
 
These are just warnings and of no concern.
 
I would like to report an issue that seems to be related to the latest kernel and HP BL460c G8 servers.

The boot sequence times out waiting for udev and ultimately the host remains without networking.

Here are some log lines that may be relevant:

Code:
Nov 29 17:02:13 pilu4 kernel: scsi host0: Emulex 10Gbe open-iscsi Initiator Driver
Nov 29 17:02:13 pilu4 kernel: scsi host0: BM_5694 : port online: 0x11
Nov 29 17:02:13 pilu4 kernel: scsi host0: BC_298 : MBX Cmd Completion timed out
Nov 29 17:02:13 pilu4 kernel: scsi host0: BG_1108 : MBX CMD get_boot_target Failed
Nov 29 17:04:14 pilu4 systemd[1]: systemd-udev-settle.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 17:03:14 pilu4 systemd-udevd[720]: 0000:05:00.2: Worker [775] processing SEQNUM=6200 is taking a long time
Nov 29 17:03:14 pilu4 systemd-udevd[720]: 0000:05:00.3: Worker [767] processing SEQNUM=6205 is taking a long time
Nov 29 17:03:15 pilu4 kernel: scsi host0: BC_298 : MBX Cmd Completion timed out
Nov 29 17:03:31 pilu4 kernel: INFO: task systemd-udevd:394 blocked for more than 120 seconds.
Nov 29 17:03:31 pilu4 kernel: Tainted: P O 5.4.73-1-pve #1
Nov 29 17:04:14 pilu4 systemd[1]: systemd-udev-settle.service: Failed with result 'exit-code'.
Nov 29 17:04:14 pilu4 systemd[1]: Failed to start udev Wait for Complete Device Initialization.
Nov 29 17:04:14 pilu4 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
 
Do you know what device is in PCI slot 0000:05:00? You should be able to work around it with "systemctl mask ifupdown-pre.service".
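For the record, a sketch of that workaround and how to revert it once a fixed kernel is in place:

Code:
# skip the ifupdown boot synchronization helper that waits for udev to settle (workaround only)
systemctl mask ifupdown-pre.service

# revert the workaround later
systemctl unmask ifupdown-pre.service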
 
Hello,
When I tried to upgrade to 6.3 and ran apt update && apt dist-upgrade, I got:

Code:
root@proxmox:~# apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  zfs-initramfs zfs-zed
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.

Do I need to change something in /etc/apt/sources.list?
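For reference only (not an official answer), one way to see which repository would provide the kept-back packages and why they are held:

Code:
apt policy zfs-initramfs zfs-zed
# simulate installing them to see what apt would do
apt-get install --dry-run zfs-initramfs zfs-zed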
 
Do you know what device is in PCI slot 0000:05:00? You should be able to work around it with "systemctl mask ifupdown-pre.service".

Code:
05:00.0 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 01)
05:00.1 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 01)
05:00.2 Mass storage controller: Emulex Corporation OneConnect 10Gb iSCSI Initiator (be3) (rev 01)
05:00.3 Mass storage controller: Emulex Corporation OneConnect 10Gb iSCSI Initiator (be3) (rev 01)

I'd say it's the iSCSI initiator embedded in the Ethernet controller
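A side note: which kernel module has claimed those functions can be confirmed with lspci (the slot address is taken from the output above):

Code:
# -k shows the driver in use and the kernel modules capable of handling the device
lspci -k -s 05:00.2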
 
I just ran a clean installation on my HP BL460C Gen 8 and it won't let me log in to the GUI. Proxmox 6.2 works fine, and I also did an upgrade from Proxmox 6.2 to 6.3 with the same issue. I think maybe it doesn't see my Ethernet card. I will check.
 
I got the same issue also. I have an HPE BL460C G8 blade too; it takes a long time to boot, and the installer also took a long time before it reached the installation screen. In the end I am unable to log in to the GUI. It shows the IP address at the login prompt, but nothing more.
 
That sounds a bit like this thread. It was a regression in the kernel (the be2iscsi module, to be specific) that we found and fixed with upstream; there's a newer kernel available on pvetest, see:
https://forum.proxmox.com/threads/h...5-4-73-5-4-78-kernel.79907/page-2#post-354354
Can you please try that one?
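For anyone following along, a hedged sketch of pulling the newer kernel from pvetest (disable the pvetest entry again afterwards):

Code:
# temporarily enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve buster pvetest" \
  > /etc/apt/sources.list.d/pvetest.list

apt update
# pve-kernel-5.4 is the metapackage that pulls in the latest 5.4 kernel
apt install pve-kernel-5.4
# reboot into the new kernel afterwards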
 
