Proxmox VE 5.0 beta1 released!

martin

Proxmox Staff Member
We are proud to announce the release of the first beta of our Proxmox VE 5.x family, based on the upcoming Debian Stretch.

With the first beta we invite you to test your hardware and your upgrade path. The underlying Debian Stretch is already in good shape, and the 4.10 kernel performs outstandingly well. For example, the 4.10 kernel allows running Windows Server 2016 with Hyper-V as a guest OS (nested virtualization).

This beta release already provides packages for Ceph Luminous v12.0.0 (dev), the basis for the next long-term Ceph release.

What's next?
In the coming weeks we will integrate new features into the beta step by step, and we will fix all release-critical bugs.

Download
https://www.proxmox.com/en/downloads

Alternate ISO download:
http://download.proxmox.com/iso/

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade a current beta installation to the stable 5.0 release via apt?
A: Yes, upgrading from the beta to the stable installation will be possible via apt.

Q: Can I install Proxmox VE 5.0 beta on top of Debian Stretch?
A: Yes, see Install Proxmox VE on Debian Stretch

Q: Can I dist-upgrade Proxmox VE 4.4 to 5.0 beta with apt dist-upgrade?
A: Yes, you can.
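A minimal sketch of such a dist-upgrade, assuming a default PVE 4.4 installation (the beta repository filename pve-beta.list is just an example name; back up before upgrading):

```shell
# Switch the Debian base repositories from jessie to stretch:
sed -i 's/jessie/stretch/g' /etc/apt/sources.list

# Point apt at the PVE 5.0 beta repository (pvetest):
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" \
    > /etc/apt/sources.list.d/pve-beta.list

# Disable the old enterprise repository during the beta:
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Perform the distribution upgrade:
apt-get update && apt-get dist-upgrade
```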

Q: Which repository can I use for Proxmox VE 5.0 beta?
A: deb http://download.proxmox.com/debian/pve stretch pvetest

Q: When do you expect the stable Proxmox VE release?
A: The final Proxmox VE 5.0 will be available as soon as Debian Stretch is stable and all release-critical bugs are fixed (May 2017 or later).

Q: Where can I get more information about feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

Please help us reach the final release date by testing this beta and providing feedback.

A big thank-you to our active community for all the feedback, testing, bug reports, and patch submissions.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Thank you for this!

I can't see any DRBD packages in the new repo; what is the plan for DRBD?
I have tried the drbdmanage plugin supplied by Linbit on the latest PVE 4.4, but unfortunately it has too many issues; it's not stable.
 
Hi,

this is the beta version and things can change, so no, there is no document for this yet.

You only need to change the repositories to stretch.

On a plain PVE installation there are two repository files:

/etc/apt/sources.list
/etc/apt/sources.list.d/pve-enterprise.list

The new PVE repository is:
deb http://download.proxmox.com/debian/pve stretch pvetest
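For illustration, after the switch the repository files might look like this (hypothetical contents for a test node without a subscription; exact Debian mirror lines vary):

```text
# /etc/apt/sources.list
deb http://ftp.debian.org/debian stretch main contrib
deb http://security.debian.org stretch/updates main contrib

# /etc/apt/sources.list.d/pve-enterprise.list
# (keep it disabled during the beta)
# deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

# /etc/apt/sources.list.d/pve-test.list
deb http://download.proxmox.com/debian/pve stretch pvetest
```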
 
I tried the installer from the ISO, and it stopped at partitioning the disk. Something locks the disk so it can't be partitioned. I tried twice, and the same thing happened. I then tried the 4.4 installer and it worked as it should, so something behaves differently in the 5.0 installer vs the one in 4.4.
But I managed to install with 4.4 and dist-upgrade to Stretch.
So far it looks very nice for a beta. I saw it had the latest ZFS release.
 
> I tried the installer from the ISO, and it stopped at partitioning the disk. Something locks the disk so it can't be partitioned. I tried twice, and the same thing happened. I then tried the 4.4 installer and it worked as it should, so something behaves differently in the 5.0 installer vs the one in 4.4.
> But I managed to install with 4.4 and dist-upgrade to Stretch.
> So far it looks very nice for a beta. I saw it had the latest ZFS release.

Could you give the exact error message? It seems like the new beta installer is less forgiving when handling old cruft on the disks; we'll probably add extra cleanup steps for some of the more common scenarios.
 
> Could you give the exact error message? It seems like the new beta installer is less forgiving when handling old cruft on the disks; we'll probably add extra cleanup steps for some of the more common scenarios.

I don't have the output, but as I remember from looking at the screen after pressing Alt+F2, it was when it tried to run fdisk to partition the disk: it couldn't do so because the device was busy, and the new partition table would only be used at the next reboot. On the second try I had removed all the partitions before starting, but the same error occurred.
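For what it's worth, a possible manual cleanup from the installer's debug shell (Alt+F2); /dev/sda is a placeholder for the target disk, and this irreversibly destroys all data on it:

```shell
# Deactivate any LVM volume groups that may be keeping the disk busy:
vgchange -an

# Wipe filesystem, RAID, and partition-table signatures from the disk:
wipefs --all /dev/sda

# Zap any remaining GPT/MBR structures:
sgdisk --zap-all /dev/sda
```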
 
Hi, will you provide an upgrade path from 5.0 beta to 5.0 final?
The point is, if I try the beta now coming from 4.4, will I be able to get back onto the stable branch after 5.0 is released?
 
> Hi, will you provide an upgrade path from 5.0 beta to 5.0 final?
> The point is, if I try the beta now coming from 4.4, will I be able to get back onto the stable branch after 5.0 is released?

Yes, see the announcement/first post:

> FAQ
> Q: Can I upgrade a current beta installation to the stable 5.0 release via apt?
> A: Yes, upgrading from the beta to the stable installation will be possible via apt.
 
I noticed that the PVE 5.0 beta is shipping with the DRBD 8.4.7 kernel module (which is fine). Would it be possible to install the DRBD9 kernel module instead for people who want to test that one?
 
> I noticed that the PVE 5.0 beta is shipping with the DRBD 8.4.7 kernel module (which is fine). Would it be possible to install the DRBD9 kernel module instead for people who want to test that one?

You can use DKMS for the DRBD9 kernel module.
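A rough sketch of the DKMS route; this assumes Linbit publishes a drbd-dkms package for your release (package and repository names are assumptions, so check Linbit's documentation):

```shell
# DKMS needs kernel headers matching the running kernel to build the module:
apt-get install pve-headers-$(uname -r) drbd-dkms

# Afterwards, confirm which DRBD module version is available:
modinfo drbd | grep '^version'
```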
 
My test environment is two nested PVE clusters with four nodes each: three Ceph nodes and one compute node per cluster. I successfully upgraded each cluster from 4.4 to 5.0 beta and am ready to upgrade Ceph from Jewel to Luminous, but I am unsure how to proceed. Do I need to change the source in ceph.list? If so, to what?
 
> My test environment is two nested PVE clusters with four nodes each: three Ceph nodes and one compute node per cluster. I successfully upgraded each cluster from 4.4 to 5.0 beta and am ready to upgrade Ceph from Jewel to Luminous, but I am unsure how to proceed. Do I need to change the source in ceph.list? If so, to what?

Upgrading Ceph is currently not tested very much. You need to take some steps before upgrading, as described in the Ceph Kraken release notes, then switch to our Ceph repositories for Stretch and upgrade the packages (e.g., by calling "pveceph install"). A fresh install of Luminous should work out of the box, with the caveat that Ceph Luminous seems to have some issues with IPv6 (which we are currently fixing and/or reporting upstream). Please report any issues you encounter.
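Sketched out, the sequence could look like this; it assumes a healthy cluster, the Jewel-to-Luminous prerequisites from the Ceph Kraken release notes, and Proxmox's Luminous repository for Stretch (the repository URL is an assumption, so verify it against the official docs):

```shell
# Required once per cluster before upgrading past Jewel
# (per the Kraken release notes):
ceph osd set sortbitwise

# On each node, point ceph.list at the Luminous repository for Stretch:
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" \
    > /etc/apt/sources.list.d/ceph.list

# Pull the Luminous packages:
pveceph install

# Restart monitors first, then OSDs, one node at a time:
systemctl restart ceph-mon.target
systemctl restart ceph-osd.target
```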
 
> I noticed that the PVE 5.0 beta is shipping with the DRBD 8.4.7 kernel module (which is fine). Would it be possible to install the DRBD9 kernel module instead for people who want to test that one?

Stretch has DRBD version 8.9.0. Why 8.4.7 in the Proxmox beta1?
 
> Stretch has DRBD version 8.9.0. Why 8.4.7 in the Proxmox beta1?

The Debian Stretch default kernel uses kernel module 8.4.7, the same as our kernel.

So by default you can work with DRBD8; if you want to use DRBD9, you just have to follow the Linbit HowTos and add their repository to your sources.list. This repo already exists for 4.x, and I assume it will also be available for 5.x in the future; contact Linbit or the DRBD user mailing list for details.
 
> The Debian Stretch default kernel uses kernel module 8.4.7, the same as our kernel.
>
> So by default you can work with DRBD8; if you want to use DRBD9, you just have to follow the Linbit HowTos and add their repository to your sources.list.

Ah, it looks like I was misinformed: drbd-utils is at 8.9.x, but those are the userspace utils, not the kernel module itself. Tom, thanks for the info. I hope 5.0 will be even more stable and trouble-free for production; we will use it for new clusters in the near future.
 
Hi -

Does Proxmox offer support for Ceph BlueStore yet? As I understand it, it has to be added to the pveceph tool, since the direct Ceph management tools are not supposed to be used with Proxmox.

Thanks!
 
> Could you give the exact error message? It seems like the new beta installer is less forgiving when handling old cruft on the disks; we'll probably add extra cleanup steps for some of the more common scenarios.

I have the same behaviour. This is the error message:
 

Attachments

  • IMG_20170324_162435.jpg
