Proxmox VE 6.0 beta released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jul 4, 2019.

  1. martin

    martin Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    639
    Likes Received:
    360
    We're happy to announce the first beta release for the Proxmox VE 6.x family! It's based on the great Debian Buster (Debian 10) and a 5.0 kernel, QEMU 4.0, ZFS 0.8.1, Ceph 14.2.1, Corosync 3.0 and countless improvements and bugfixes.

    The new installer supports ZFS root via UEFI; for example, you can boot a ZFS mirror on NVMe SSDs (using systemd-boot instead of GRUB). The full release notes will be available together with the final release announcement.

    This Proxmox VE release is a beta version. If you test or upgrade, make sure to create backups of your data first.

    Download
    http://download.proxmox.com/iso/

    Community Forum
    https://forum.proxmox.com

    Bugtracker
    https://bugzilla.proxmox.com

    Source code
    https://git.proxmox.com

    FAQ
    Q: Can I upgrade a 6.0 beta installation to the stable 6.0 release via apt?

    A: Yes, upgrading from the beta to the stable release will be possible via apt.

    Q: Which apt repository can I use for Proxmox VE 6.0 beta?
    Code:
    deb http://download.proxmox.com/debian/pve buster pvetest
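    To enable it on an existing beta installation, that line can go into a sources file, followed by a refresh of the package index (a minimal sketch; the file name pvetest.list is just an arbitrary choice):
    Code:
    # add the pvetest repository (any file under /etc/apt/sources.list.d/ works)
    echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
    # refresh the package index
    apt update
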
    Q: Can I install Proxmox VE 6.0 beta on top of Debian Buster?
    A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
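    Roughly, the procedure described there looks like this (a minimal sketch only; the wiki article above is authoritative, and the repository key file name shown here is an assumption based on the 6.x naming scheme):
    Code:
    # on a plain Debian Buster system: add the Proxmox VE repository and its key
    echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pve.list
    wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
    # update the system and install the Proxmox VE meta-package
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi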

    Q: Can I dist-upgrade Proxmox VE 5.4 to 6.0 beta with apt?
    A: Please follow the upgrade instructions exactly, as there is a major version bump of corosync (2.x to 3.x); a rough sketch of the procedure is shown below the link.
    https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
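    The overall shape of the procedure looks roughly like this (a sketch only; the wiki article above is authoritative, especially for the corosync 3 steps on clusters):
    Code:
    # run the checklist script shipped with the latest 5.4 packages and resolve everything it reports
    pve5to6
    # switch the Debian repositories from stretch to buster
    sed -i 's/stretch/buster/g' /etc/apt/sources.list
    # point the Proxmox VE repository at buster (pvetest for the beta)
    echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pve.list
    # upgrade
    apt update && apt dist-upgrade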

    Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.0 beta with Ceph Nautilus?
    A: This is a two-step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly; a rough outline is sketched below the links.
    https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
    https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
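    Very roughly, the Ceph part follows the usual pattern for a Luminous-to-Nautilus upgrade (a sketch only; the wiki article is authoritative, and the repository line below is an assumption for the beta):
    Code:
    # switch the Ceph repository to Nautilus on all nodes
    echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" > /etc/apt/sources.list.d/ceph.list
    # avoid rebalancing while daemons are restarted
    ceph osd set noout
    # upgrade the packages on each node
    apt update && apt dist-upgrade
    # restart daemons in order: monitors first, then managers, then OSDs
    systemctl restart ceph-mon.target
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target
    # once all daemons run Nautilus
    ceph osd require-osd-release nautilus
    ceph osd unset noout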

    Q: When do you expect the stable Proxmox VE 6.0 release?

    A: The final Proxmox VE 6.0 will be available as soon as Debian Buster is stable and all Proxmox VE 6.0 release critical bugs are fixed.

    Q: Where can I get more information about feature updates?
    A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

    We invite you to test your hardware and your upgrade path, and we are thankful for your feedback, ideas, and bug reports.

    __________________
    Best regards,

    Martin Maurer
    Proxmox VE project leader
     
  2. Vladimir Bulgaru

    Joined:
    Jun 1, 2019
    Messages:
    102
    Likes Received:
    13
    Awesome! :D Thank you!
     
  3. badji

    badji Member

    Joined:
    Jan 14, 2011
    Messages:
    191
    Likes Received:
    11
    Top. Thank you.
     
  4. juniper

    juniper Member

    Joined:
    Oct 21, 2013
    Messages:
    50
    Likes Received:
    0
    A consideration about pve5to6:

    FAIL: ring0_addr 'srv3' of node 'srv3' is not an IP address, consider replacing it with the currently resolved IP address.

    Is this correct? I have always used hostnames (resolved by the system and DNS) instead of private IPs.
     
  5. Dominic

    Dominic New Member
    Staff Member

    Joined:
    Mar 18, 2019
    Messages:
    28
    Likes Received:
    3
    From the Proxmox VE Administration Guide: "You may use plain IP addresses or also hostnames here. If you use hostnames ensure that they are resolvable from all nodes." This is also applicable when upgrading.
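    For illustration, a nodelist entry in /etc/pve/corosync.conf using an IP address instead of a hostname could look like this (a sketch with made-up values; only edit corosync.conf as described in the documentation and remember to increment config_version):
    Code:
    nodelist {
      node {
        name: srv3
        nodeid: 3
        quorum_votes: 1
        # resolved IP address instead of the hostname (example value)
        ring0_addr: 192.168.1.3
      }
    }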
     
  6. juniper

    juniper Member

    Joined:
    Oct 21, 2013
    Messages:
    50
    Likes Received:
    0
    ...OK... and why does pve5to6 show "FAIL"? All my hostnames are resolvable.
     
  7. morph027

    morph027 Active Member

    Joined:
    Mar 22, 2013
    Messages:
    413
    Likes Received:
    51
    Awesome, just need to find some time to re-install my home server for testing ;)
     
  8. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    We just had users run into issues with this really often: /etc/hosts files that someone forgot to update, or an address change where it was forgotten that the corosync cluster network used that address too, which can have bad implications, or entries not updated on node leave, join, re-join... So a hard notice seemed justified, especially as corosync 3 does not have the same auto-discovery that corosync 2 with multicast had. There, if the address was missing, it normally still worked, as corosync could find the other nodes by deriving the multicast group address from the cluster name - with the new version that won't fly... Might re-evaluate, though; Dominic sent a patch, let's see what the other devs think.
     
  9. Sralityhe

    Sralityhe Member

    Joined:
    Jul 5, 2017
    Messages:
    58
    Likes Received:
    1
    Thanks for the release! Just upgraded my home server, worked flawlessly :cool::cool:
     
  10. casparsmit

    casparsmit Member

    Joined:
    Feb 24, 2015
    Messages:
    33
    Likes Received:
    0
    I seem to be running into some major problems during the upgrade. udev and/or systemd crash upon restarting udev after the upgrade.

    When the upgrade process comes to udev:

    -----
    Setting up udev (241-5) ...

    Configuration file '/etc/init.d/udev'
    ==> Modified (by you or by a script) since installation.
    ==> Package distributor has shipped an updated version.

    What would you like to do about it ? Your options are:
    Y or I : install the package maintainer's version
    N or O : keep your currently-installed version
    D : show the differences between the versions
    Z : start a shell to examine the situation
    The default action is to keep your current version.

    *** udev (Y/I/N/O/D/Z) [default=N] ? Y

    Installing new version of config file /etc/init.d/udev ...
    Installing new version of config file /etc/udev/udev.conf ...
    Job for systemd-udevd.service failed.
    See "systemctl status systemd-udevd.service" and "journalctl -xe" for details.

    invoke-rc.d: initscript udev, action "restart" failed.

    ● systemd-udevd.service - udev Kernel Device Manager
    Loaded: loaded (/lib/systemd/system/systemd-udevd.service; static; vendor preset: enabled)
    Drop-In: /etc/systemd/system/systemd-udevd.service.d
    └─override.conf
    Active: activating (start) since Fri 2019-07-05 15:06:05 CEST; 10ms ago
    Docs: man:systemd-udevd.service(8)
    man:udev(7)
    Main PID: 24312 (systemd-udevd)
    Tasks: 1
    Memory: 856.0K
    CGroup: /system.slice/systemd-udevd.service
    └─24312 /lib/systemd/systemd-udevd

    Jul 05 15:06:05 test01 systemd[1]: Starting udev Kernel Device Manager...
    dpkg: error processing package udev (--configure):
    installed udev package post-installation script subprocess returned error exit status 1

    Message from syslogd@test01 at Jul 5 15:06:05 ...
    kernel:[ 751.238697] systemd[1]: segfault at 50 ip 0000565475637f00 sp 00007ffe62628910 error 4 in systemd[5654755dd000+b1000]

    Broadcast message from systemd-journald@test01 (Fri 2019-07-05 15:06:05 CEST):
    systemd[1]: Caught <SEGV>, dumped core as pid 24359.

    Message from syslogd@test01 at Jul 5 15:06:05 ...
    systemd[1]: Caught <SEGV>, dumped core as pid 24359.

    Message from syslogd@test01 at Jul 5 15:06:05 ...
    systemd[1]: Freezing execution.

    Broadcast message from systemd-journald@test01 (Fri 2019-07-05 15:06:05 CEST):

    systemd[1]: Freezing execution.

    Failed to reload daemon: Connection reset by peer
    Failed to retrieve unit state: Failed to activate service 'org.freedesktop.systemd1': timed out

    -----

    From then on the system becomes unresponsive and every systemctl command fails.
    The system is also unbootable after reboot.

    I re-did the whole procedure and answered "N" at the question about the udev config files being altered.
    Then everything seems to be OK (the upgrade finishes), but again after a reboot the system is unbootable since systemd-udevd.service cannot start.

    Any hint on what might be going on here?
     
  11. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,269
    Likes Received:
    117
    This seems rather odd (segfaults in core binaries usually make me think of broken RAM).
    Please open a separate thread, since this seems to be a more involved issue, and provide the following (a sketch of commands for collecting these is below):
    * the diff of the shipped udev init-script and the one you have on your system (did you modify it)?
    * what is written in the systemd-override-conf for udev ('/etc/systemd/system/systemd-udevd.service.d/override.conf')?
    * the output of `dmesg`
    * the journal since the last boot `journalctl -b`

    Thanks!
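
    A minimal sketch of commands that could be used to collect this information (the override path is taken from the systemctl output above):
    Code:
    # the udev override that is currently in effect
    cat /etc/systemd/system/systemd-udevd.service.d/override.conf
    # kernel messages and the journal since the last boot, saved for attaching to the new thread
    dmesg > /tmp/dmesg.txt
    journalctl -b > /tmp/journalctl-b.txt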
     
  12. badji

    badji Member

    Joined:
    Jan 14, 2011
    Messages:
    191
    Likes Received:
    11
    Good job. It is just missing the Ceph mgr dashboard on port 8443 for now. Thanks.
     
    James Pass likes this.
  13. hibouambigu

    hibouambigu New Member

    Joined:
    Jun 27, 2019
    Messages:
    5
    Likes Received:
    0
    Ohhh yes what an exciting thing to wake up to :D Going to update on a test node today. Thank you team, for all your awesome work.

    Very much looking forward to the updates!

    From what I understand, the full release notes will be shown once the stable build is done, but would you be able to give any teasers about some of the new Ceph Nautilus features being rolled into this build? ;) Will some of the shiny new stuff added in Nautilus make its way to the Proxmox management GUI, such as predictive RADOS storage device health/lifespan?
     
  14. liquidox

    liquidox Member
    Proxmox Subscriber

    Joined:
    Sep 21, 2016
    Messages:
    30
    Likes Received:
    1
    Great news, can't wait to test it out!
     
  15. Breymja

    Breymja Member

    Joined:
    Aug 14, 2017
    Messages:
    38
    Likes Received:
    0
    Tried to install it on my new EX62 from Hetzner, got "Installation aborted - cannot continue" after DHCP, without any more information. Any idea what the issue could be or how I can find out?
     
  16. Sralityhe

    Sralityhe Member

    Joined:
    Jul 5, 2017
    Messages:
    58
    Likes Received:
    1
    Are you using the ISO to install?
    Try Alt+F2 to switch to another TTY session; maybe you can see something there.
     
  17. Breymja

    Breymja Member

    Joined:
    Aug 14, 2017
    Messages:
    38
    Likes Received:
    0
    Sorry, yes, I'm trying to install it via the ISO. Unfortunately your tip did not give any more information.
    I assume there is still an issue installing on a system with only two NVMe drives with UEFI? The installer seems to come up in legacy mode. (Unfortunately the SMB share installation via LARA is extremely slow, so I don't know yet whether that will work; I wanted a UEFI installation, though, given the changelog says it is now supported.)
     
  18. Breymja

    Breymja Member

    Joined:
    Aug 14, 2017
    Messages:
    38
    Likes Received:
    0
    Okay, now I have Proxmox installed; the RAID works and it does boot, but while the BIOS shows the two disks as UEFI boot disks, they boot via GRUB and efibootmgr is unavailable, so it is not a UEFI installation.

    Will a UEFI installation be possible in the future? For now it doesn't work; the installation aborts when started via UEFI. I have no idea how I'm supposed to get it installed with systemd-boot instead of GRUB on UEFI.
     
  19. victorhooi

    victorhooi Member

    Joined:
    Apr 3, 2018
    Messages:
    132
    Likes Received:
    6
    I just tried to install using ZFS on a Samsung M.2 NVMe drive - however, it would not boot into Proxmox VE after installation.

    It simply took me to a screen that said “Reboot into firmware interface”.

    However, when I re-did the installation using ext4, I was able to boot successfully.

    Does that sound like a bug?
     
  20. Breymja

    Breymja Member

    Joined:
    Aug 14, 2017
    Messages:
    38
    Likes Received:
    0
    You are probably trying to boot via UEFI, which doesn't seem to work yet. I can only install and boot via legacy mode.
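
    One way to check which mode a system (or the installer environment) actually booted in, just as a sketch:
    Code:
    # this directory only exists when the system was booted via UEFI
    ls /sys/firmware/efi
    # list the firmware boot entries (only works on a UEFI boot)
    efibootmgr -v
    # show disks/partitions, e.g. to see whether an EFI System Partition was created
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT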
     