Proxmox VE 6.0 released!

martin (Proxmox Staff Member)
We're excited to announce the final release of Proxmox VE 6.0! It's based on Debian 10 (codename "Buster") and the latest 5.0 Linux kernel, QEMU 4.0, LXC 3.1.0, ZFS 0.8.1, Ceph 14.2, Corosync 3.0, and more.

This major release includes the latest Ceph Nautilus features and an improved Ceph management dashboard. We have updated the cluster communication stack to Corosync 3 using Kronosnet, and the cluster creation wizard now includes a network selection widget that makes it simple to pick the correct link address.

With ZFS 0.8.1 we have included TRIM support for SSDs and also support for native encryption with convenient key handling.
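For anyone who wants to try these ZFS 0.8 features right away, a minimal sketch (pool and dataset names are just examples, not taken from the release notes):
Code:
# trim all SSDs in a pool once, or enable periodic automatic trimming
zpool trim rpool
zpool set autotrim=on rpool

# create a natively encrypted dataset protected by a passphrase
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/secure

# after a reboot, load the key before mounting the dataset again
zfs load-key rpool/secure
zfs mount rpool/secure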

The new installer supports ZFS root via UEFI; for example, you can boot a ZFS mirror on NVMe SSDs (using systemd-boot instead of GRUB).

And as always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.0

Video intro
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-0

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Documentation
https://pve.proxmox.com/pve-docs/

Community Forum
https://forum.proxmox.com

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I dist-upgrade Proxmox VE 5.4 to 6.0 with apt?

A: Please follow the upgrade instructions exactly, as there is a major version bump of Corosync (2.x to 3.x):
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
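In rough outline, the upgrade itself boils down to the following (a sketch only; the wiki page above is authoritative and includes additional checks, especially for clusters and hyper-converged Ceph setups):
Code:
# run the checklist script shipped with the latest Proxmox VE 5.4 packages
pve5to6

# switch the Debian and Proxmox VE repositories from stretch to buster
sed -i 's/stretch/buster/g' /etc/apt/sources.list
sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/pve-enterprise.list

# perform the actual upgrade
apt update && apt dist-upgrade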

Q: Can I install Proxmox VE 6.0 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
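The short version looks roughly like this (a sketch; the wiki article covers the full procedure, including the hostname/IP prerequisites, and is the place to double-check the repository and key paths):
Code:
# add the Proxmox VE repository and its key on a plain Debian Buster system
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

# install Proxmox VE and its recommended companions
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi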

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.0 with Ceph Nautilus?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes; please follow the upgrade documentation exactly.
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
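Very roughly, the Ceph part looks like this (a sketch only; the wiki article above describes the authoritative, step-by-step procedure for mons, mgrs and OSDs):
Code:
# check health and daemon versions before, during and after the upgrade
ceph -s
ceph versions

# avoid unnecessary rebalancing while daemons are restarted
ceph osd set noout

# ...upgrade and restart the Ceph daemons node by node as described in the wiki...

# once every daemon runs Nautilus, require the new release and clear the flag
ceph osd require-osd-release nautilus
ceph osd unset noout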

Q: Where can I get more information about future feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
That was fast, so soon after the beta was released. Nice!

Changing over from the beta to the release was as easy as editing one apt line and running apt update/upgrade :)
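For reference, that one line is the repository entry; assuming the beta system pointed at the pvetest repository, the change looks roughly like this:
Code:
# old entry: deb http://download.proxmox.com/debian/pve buster pvetest
# new entry: deb http://download.proxmox.com/debian/pve buster pve-no-subscription
apt update && apt dist-upgrade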
 
Just finished installing on a server with no errors. Booted successfully, SSH console accessible, but NO web interface. How do I start it?
 
Just finished installing on a server with no errors. Booted successfully, SSH console accessible, but NO web interface. How do I start it?
It should start by default. Check if the pveproxy service is running.
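For example, with standard systemd tooling (nothing assumed beyond the service name):
Code:
# check whether the web interface service is running
systemctl status pveproxy

# restart it if needed and inspect its log for errors
systemctl restart pveproxy
journalctl -u pveproxy -b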
 
Apologies. It works on https://<host-ip>:8006. I would make the message shown just before confirming the reboot more specific.

There's also the "issue" message visible after boot to make this a bit more clear, for example:
Welcome to the Proxmox Virtual Environment. Please use your web browser to
configure this server - connect to:

https://192.168.16.38:8006/

But I guess clarifying it an additional time on successful installation doesn't hurt.
 
There's also the "issue" message visible after boot to make this a bit more clear. But I guess clarifying it an additional time on successful installation doesn't hurt.

I guess so. It happened to me because I have the console on one PC on one LAN and the management interface running on another PC on another LAN.

The thing is that the message shown on successful completion of the installation is misleading (something like "after the reboot, connect with your browser to the management interface") without mentioning port 8006... that's all. Just a small thing.
 
When I update from the 6.0 beta (6.0-1) with "apt update", it shows the error below:
Code:
Hit:1 http://security.debian.org buster/updates InRelease
Hit:2 http://ftp.debian.org/debian buster InRelease
Err:3 https://enterprise.proxmox.com/debian/pve buster InRelease
  401  Unauthorized [IP: 94.136.30.185 443]
Hit:4 http://ftp.debian.org/debian buster-updates InRelease
Reading package lists... Done
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease  401  Unauthorized [IP: 94.136.30.185 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

And I could not upgrade to the released version:
Code:
root@pve:/# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve:/# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve:/#
 
The thing is that the message shown on successful completion of the installation is misleading (something like "after the reboot, connect with your browser to the management interface") without mentioning port 8006... that's all. Just a small thing.
No worries, it makes sense to have. I've a patch ready which would result in: installer-success.png
Too late for this release, but maybe for the next one :)
 
Err:3 https://enterprise.proxmox.com/debian/pve buster InRelease
  401  Unauthorized [IP: 94.136.30.185 443]
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease  401  Unauthorized [IP: 94.136.30.185 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

You have no subscription for the enterprise repository, but also no non-enterprise repository set up; see: https://pve.proxmox.com/wiki/Package_Repositories
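Concretely, that usually means either entering a valid subscription key or disabling the enterprise repository and enabling the no-subscription one, roughly like this (a sketch; file names as created by a default installation):
Code:
# disable the enterprise repository (it requires a subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# add the no-subscription repository instead
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" >> /etc/apt/sources.list

apt update && apt dist-upgrade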
 
For users upgrading from PVE 6.0 Beta to PVE 6.0 using a redundant ZFS rpool:

The mechanism used to synchronize the EFI System Partitions ("ESPs"), which store the kernels and initrds available for booting, has changed from the Beta ISO to the final ISO.

If your system is using ZFS for the root file system and was set up with the PVE 6.0 Beta ISO, please check the output of "pve-efiboot-tool refresh". If it prints the following error message:

Code:
# pve-efiboot-tool refresh
Running hook script 'pve-auto-removal'..
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.

then you need to initialize each ESP with pve-efiboot-tool. To verify which partitions contain an ESP, run "lsblk" on each disk making up your rpool (replace '/dev/sd?' with the appropriate paths if your devices are named differently):
Code:
# lsblk -o path,parttype,fstype /dev/sd?
PATH      PARTTYPE                             FSTYPE
/dev/sda                                       zfs_member
/dev/sda1 21686148-6449-6e6f-744e-656564454649 zfs_member
/dev/sda2 c12a7328-f81f-11d2-ba4b-00a0c93ec93b vfat
/dev/sda3 6a898cc3-1dd2-11b2-99a6-080020736631 zfs_member

vfat partitions with partition type "c12a7328-f81f-11d2-ba4b-00a0c93ec93b" and index 2 are ESPs set up by the installer. For each of those partitions (e.g., /dev/sda2, /dev/sdb2, ...), use the following command to re-initialize the ESP and register it for ongoing synchronization:
Code:
# pve-efiboot-tool init /dev/sda2
Re-executing '/usr/sbin/pve-efiboot-tool' in new private mount namespace..
UUID="8387-3C66" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Mounting '/dev/sda2' on '/var/tmp/espmounts/8387-3C66'.
Installing systemd-boot..
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/8387-3C66/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/8387-3C66/EFI/BOOT/BOOTX64.EFI".
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/sda2'.
Adding '/dev/sda2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'pve-auto-removal'..
Running hook script 'zz-pve-efiboot'..
No /etc/kernel/cmdline found - falling back to /proc/cmdline
Copying and configuring kernels on /dev/disk/by-uuid/8387-3C66
        Copying kernel and creating boot-entry for 4.15.18-18-pve
        Copying kernel and creating boot-entry for 5.0.12-1-pve
        Copying kernel and creating boot-entry for 5.0.15-1-pve

See our pve-admin-guide for more detailed information about our new bootloader setup and pve-efiboot-tool.
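Putting the steps together for a typical two-disk mirror (a sketch; the device names are examples and must match the ESPs found with lsblk on your system):
Code:
for esp in /dev/sda2 /dev/sdb2; do
    pve-efiboot-tool init "$esp"
done

# afterwards, a refresh should sync kernels and initrds to all registered ESPs
pve-efiboot-tool refresh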
 
For users upgrading from PVE 6.0 Beta to PVE 6.0 using a redundant ZFS rpool: [...] See our pve-admin-guide for more detailed information about our new bootloader setup and pve-efiboot-tool.

All done, I needed to change this (though an earlier reboot went fine).

Mind you that for an NVMe drive this command obviously changes to something like:
Code:
lsblk -o path,parttype,fstype /dev/nvme0n?
 
FAIL: Resolved node IP '192.168.30.2' not configured or active for 'pve'

This is the only error I can't figure out. My host is 30.2 and my Proxmox is 30.3, but what do I need to configure? I don't get it. I don't use clustering or Ceph.
 
FAIL: Resolved node IP '192.168.30.2' not configured or active for 'pve'

This is the only error I can't figure out. My host is 30.2 and my Proxmox is 30.3, but what do I need to configure? I don't get it. I don't use clustering or Ceph.

This means your hostname and local network settings are in disagreement. Your hostname should resolve to one of your local IP addresses. Either you have an old/outdated/wrong IP in /etc/hosts, or your DNS responds to your hostname with wrong information.
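A quick way to check (standard tools; the address and host name below are just examples matching the post):
Code:
# what does the node's hostname currently resolve to?
hostname
hostname --ip-address
getent hosts pve

# the /etc/hosts entry should point at the node's own address, e.g.:
#   192.168.30.3   pve.example.local pve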
 
Just upgraded my server running 5.4 with ZFS and a four-disk setup with two striped mirrors, booting from the first mirror.
Hardware is an HP MicroServer Gen8 with the disk controller in AHCI SATA mode.
So far, every 5.4 kernel update had worked and the system was able to reboot.

Now, after upgrading to 6.0 and rebooting, the system crashes with:

Code:
Attempting Boot From Hard Drive (C:)
error: no such device: (uuid)
error: unknown filesystem.
Entering rescue mode...
grub rescue>

I know this was mentioned in another thread, but this is the first time it has happened, and it must be related to the upgrade to PVE 6.0.
Enclosed is the log of the upgrade...

Any idea of what might have gone wrong?
 

Attachments

  • micro-berlin-upgrade.txt (365.9 KB)
Question about key rotation every 24h
In the changelog we can read: "Automatic rotation of authentication key every 24h: by limiting the key lifetime to 24h, the impact of key leakage or a malicious administrator is reduced."
It seems to be related to this commit/diff: https://git.proxmox.com/?p=pve-acce...ff;h=243262f1853e94bd02d0614a1ae76442ec1e85e9

I do not see what it involves... the API, Corosync, the PVE cluster FS, or cluster management?
 
