Proxmox VE 6.0 beta released!

martin

Proxmox Staff Member
We're happy to announce the first beta release of the Proxmox VE 6.x family! It's based on the great Debian Buster (Debian 10) and a 5.0 kernel, and comes with QEMU 4.0, ZFS 0.8.1, Ceph 14.2.1, Corosync 3.0, and countless improvements and bugfixes.

The new installer supports ZFS root via UEFI; for example, you can boot a ZFS mirror on NVMe SSDs (using systemd-boot instead of GRUB). The full release notes will be available together with the final release announcement.
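
If you want to verify which boot mode and boot loader an installed system actually uses, standard tools can tell you; the following is only a suggestion, not part of the installer:
Code:
# The directory only exists when the system was booted via UEFI
ls /sys/firmware/efi
# Show the active boot loader and ESP (reports systemd-boot when it is used)
bootctl status
# List the UEFI boot entries known to the firmware
efibootmgr -v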

This Proxmox VE release is a beta version. If you test or upgrade, make sure to create backups of your data first.

Download
http://download.proxmox.com/iso/

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade a 6.0 beta installation to the stable 6.0 release via apt?

A: Yes, upgrading from the beta to the stable release will be possible via apt.
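
For example, once the final packages are published, a regular upgrade is all that should be needed (switching from the pvetest repository to a stable one first, if desired):
Code:
apt update
apt dist-upgrade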

Q: Which apt repository can I use for Proxmox VE 6.0 beta?
A:
Code:
deb http://download.proxmox.com/debian/pve buster pvetest
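
For example, the repository line can go into a dedicated sources file (the file name below is only an example):
Code:
echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update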

Q: Can I install Proxmox VE 6.0 beta on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
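
As a rough sketch of what that wiki article describes, assuming a plain Debian Buster system (please verify the repository key URL and package list against the wiki before use):
Code:
# Add the Proxmox VE repository and its signing key
echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pve.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
# Update and install the Proxmox VE meta-package
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi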

Q: Can I dist-upgrade Proxmox VE 5.4 to 6.0 beta with apt?
A: Please follow the upgrade instructions exactly, as there is a major version bump of corosync (2.x to 3.x):
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
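
The wiki article is authoritative; as a very rough sketch (beta repository used as an example, clustered nodes need the corosync steps from the wiki first), the flow looks like this:
Code:
# Check for known problems before touching anything
pve5to6
# Switch the Debian and Proxmox VE repositories from stretch to buster
sed -i 's/stretch/buster/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# Perform the distribution upgrade
apt update
apt dist-upgrade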

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.0 beta with Ceph Nautilus?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly.
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
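
This is not a substitute for the wiki articles, but to give an idea, the Ceph part revolves around commands like these once all nodes already run Proxmox VE 6.0:
Code:
# Prevent rebalancing while daemons are restarted
ceph osd set noout
# ...upgrade and restart mons, mgrs and OSDs node by node, as documented...
# Verify that every daemon reports Nautilus
ceph versions
# Finalize the upgrade and allow rebalancing again
ceph osd require-osd-release nautilus
ceph osd unset noout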

Q: When do you expect the stable Proxmox VE 6.0 release?

A: The final Proxmox VE 6.0 will be available as soon as Debian Buster is stable and all Proxmox VE 6.0 release-critical bugs are fixed.

Q: Where can I get more information about feature updates?
A: Check our roadmap, forum, and mailing list, and subscribe to our newsletter.

We invite you to test your hardware and your upgrade path, and we are thankful for your feedback, ideas, and bug reports.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
A consideration about pve5to6:

FAIL: ring0_addr 'srv3' of node 'srv3' is not an IP address, consider replacing it with the currently resolved IP address.

Is this correct? I have always used hostnames (resolved by the system and DNS) instead of private IPs.
 
Is this correct? I have always used hostnames (resolved by the system and DNS) instead of private IPs.

From the Proxmox VE Administration Guide: "You may use plain IP addresses or also hostnames here. If you use hostnames ensure that they are resolvable from all nodes." This is also applicable when upgrading.
 
...OK, so why does pve5to6 show "FAIL"? All my hostnames are resolvable.

We have really often had users run into issues with this: /etc/hosts files that someone forgot to update, or an address change where nobody remembered that the corosync cluster network used that hostname too (which can have bad implications), or entries not being updated on node leave, join, or re-join... So a hard notice seemed justified, especially as corosync 3 does not have the same auto-discovery that corosync 2 had with multicast: there, if the address was missing, it normally still worked, because a node could find the others by deriving the multicast group address from the cluster name. With the new stack that won't fly... We might re-evaluate, though; Dominic sent a patch, let's see what the other devs think.
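
To illustrate what the check is about: a node entry in /etc/pve/corosync.conf (hypothetical example, names and address invented) would then carry a literal IP instead of the hostname:
Code:
nodelist {
  node {
    name: srv3
    nodeid: 3
    quorum_votes: 1
    # instead of 'ring0_addr: srv3'
    ring0_addr: 192.0.2.13
  }
}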
 
I seem to be running into some major problems during the upgrade. udev and/or systemd crashes when udev is restarted after the upgrade:

When the upgrade process comes to udev:

-----
Setting up udev (241-5) ...

Configuration file '/etc/init.d/udev'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.

What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.

*** udev (Y/I/N/O/D/Z) [default=N] ? Y

Installing new version of config file /etc/init.d/udev ...
Installing new version of config file /etc/udev/udev.conf ...
Job for systemd-udevd.service failed.
See "systemctl status systemd-udevd.service" and "journalctl -xe" for details.

invoke-rc.d: initscript udev, action "restart" failed.

● systemd-udevd.service - udev Kernel Device Manager
Loaded: loaded (/lib/systemd/system/systemd-udevd.service; static; vendor preset: enabled)
Drop-In: /etc/systemd/system/systemd-udevd.service.d
└─override.conf
Active: activating (start) since Fri 2019-07-05 15:06:05 CEST; 10ms ago
Docs: man:systemd-udevd.service(8)
man:udev(7)
Main PID: 24312 (systemd-udevd)
Tasks: 1
Memory: 856.0K
CGroup: /system.slice/systemd-udevd.service
└─24312 /lib/systemd/systemd-udevd

Jul 05 15:06:05 test01 systemd[1]: Starting udev Kernel Device Manager...
dpkg: error processing package udev (--configure):
installed udev package post-installation script subprocess returned error exit status 1

Message from syslogd@test01 at Jul 5 15:06:05 ...
kernel:[ 751.238697] systemd[1]: segfault at 50 ip 0000565475637f00 sp 00007ffe62628910 error 4 in systemd[5654755dd000+b1000]

Broadcast message from systemd-journald@test01 (Fri 2019-07-05 15:06:05 CEST):
systemd[1]: Caught <SEGV>, dumped core as pid 24359.

Message from syslogd@test01 at Jul 5 15:06:05 ...
systemd[1]: Caught <SEGV>, dumped core as pid 24359.

Message from syslogd@test01 at Jul 5 15:06:05 ...
systemd[1]: Freezing execution.

Broadcast message from systemd-journald@test01 (Fri 2019-07-05 15:06:05 CEST):

systemd[1]: Freezing execution.

Failed to reload daemon: Connection reset by peer
Failed to retrieve unit state: Failed to activate service 'org.freedesktop.systemd1': timed out

-----

From then on the system becomes unresponsive and every systemctl command fails.
The system is also unbootable after reboot.

I re-did the whole procedure and answered "N" at the question about the udev config files being altered.
Then everything seems to be OK (the upgrade finishes), but again after a reboot the system is unbootable, since systemd-udevd.service cannot start.

Any hint on what might be going on here?
 
This seems rather odd (segfaults in core binaries usually make me think of broken RAM).
Please open a separate thread, since this seems a more involved issue, and provide the following:
* the diff of the shipped udev init-script and the one you have on your system (did you modify it)?
* what is written in the systemd-override-conf for udev ('/etc/systemd/system/systemd-udevd.service.d/override.conf')?
* the output of `dmesg`
* the journal since the last boot `journalctl -b`
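
One possible way to collect all of that in one go (these commands are only a suggestion; adjust paths and file names as needed):
Code:
# Fetch the packaged init script and diff it against the one on disk
apt-get download udev
dpkg-deb --fsys-tarfile udev_*.deb | tar -xO ./etc/init.d/udev | diff - /etc/init.d/udev > udev-initscript.diff
# Show the local override for systemd-udevd
cat /etc/systemd/system/systemd-udevd.service.d/override.conf
# Kernel log and journal since the last boot
dmesg > dmesg.txt
journalctl -b > journal-since-boot.txt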

Thanks!
 
Good job. The only thing missing for now is the Ceph mgr2 dashboard on port 8443. Thanks.
 

Attachments

  • Screenshot_2019-07-05 pve1 - Proxmox Virtual Environment.png
  • Screenshot_2019-07-05 pve1 - Proxmox Virtual Environment(1).png
  • Screenshot_2019-07-05 pve1 - Proxmox Virtual Environment(2).png
  • Screenshot_2019-07-05 pve1 - Proxmox Virtual Environment(3).png
Ohhh yes what an exciting thing to wake up to :D Going to update on a test node today. Thank you team, for all your awesome work.

Very much looking forward to the updates!

From what I understand the full release notes will be shown once the stable build is done, but would you be able to give any teasers about some of the new Ceph Nautilus features being rolled into this build? ;) Will some of the shiny new stuff added in Nautilus make its way to the Proxmox management GUI, such as predictive RADOS storage device health/lifespan?
 
Tried to install it on my new EX62 from Hetzner, got "Installation aborted - cannot continue" after DHCP, without any more information. Any idea what the issue could be or how I can find out?
 
Tried to install it on my new EX62 from Hetzner, got "Installation aborted - cannot continue" after DHCP, without any more information. Any idea what the issue could be or how I can find out?

Are you using the ISO to install?
Try Alt+F2 to switch to another TTY session; maybe you can see something there.
 
Sorry, yes, I'm trying to install it via the ISO. Unfortunately your tip did not give any more information.
I assume there is still an issue installing on a system with only two NVMe drives under UEFI? The installer seems to come up in legacy mode. (Unfortunately the SMB share installation via LARA is extremely slow, so I don't know yet whether that will work; I wanted a UEFI installation though, given that the changelog says it's now supported.)
 
Okay, now I have Proxmox installed, the RAID works and it does boot, but while the BIOS shows the two disks as UEFI boot disks, they boot via GRUB and efibootmgr is unavailable, so it is not a UEFI installation.

Will a UEFI installation be possible in the future? For now it doesn't work; the installation aborts when started via UEFI. I have no idea how I'm supposed to get it installed with systemd-boot instead of GRUB on UEFI.
 
I just tried to install using ZFS on a Samsung M.2 NVMe drive; however, it would not boot into Proxmox VE after installation.

It simply took me to a screen that said “Reboot into firmware interface”.

However, when I re-did the installation using ext4, I was able to boot successfully.

Does that sound like a bug?
 
Probably you are trying to boot via UEFI, which doesn't seem to work yet. I can only install and boot via legacy mode.
 
