Proxmox VE 5.0 released!

Status
Not open for further replies.

martin

Proxmox Staff Member
We are very happy to announce the final release of Proxmox VE 5.0 - based on the great Debian 9 (codename "Stretch") and Linux kernel 4.10.

New Proxmox VE Storage Replication Stack
Replicas provide asynchronous data replication between two or more nodes in a cluster, thus minimizing data loss in case of failure. For organizations using local storage, the Proxmox replication feature is a great option to increase data redundancy, even for I/O-heavy workloads, while avoiding the need for complex shared or distributed storage configurations.
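Replication jobs can be managed from the GUI or on the command line with the pvesr tool. A minimal sketch, assuming a hypothetical guest with VMID 100 and a target node named "nodeB" (check "man pvesr" on your installation for the exact options):

> pvesr create-local-job 100-0 nodeB --schedule "*/15"   # replicate guest 100 to nodeB every 15 minutes
> pvesr status                                           # show the state of all replication jobs on this node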

With Proxmox VE 5.0, Ceph RBD becomes the de-facto standard for distributed storage. Packaging is now done by the Proxmox team. Ceph Luminous is not yet production-ready, but is already available for testing. If you use Ceph, follow the recommendations below.

We also have a simplified procedure for disk import from different hypervisors. You can now easily import disks from VMware, Hyper-V, or other hypervisors via a new command line tool called ‘qm importdisk’.
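A hedged example of the new command, assuming an already created VM with ID 120, a VMware disk image at /mnt/migration/winserver.vmdk (hypothetical path) and a target storage named local-lvm:

> qm importdisk 120 /mnt/migration/winserver.vmdk local-lvm

The imported disk then shows up as an unused disk on VM 120 and can be attached to a controller slot via the GUI or qm set.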

Other new features include live migration with local storage via QEMU, USB and host PCI address visibility in the GUI, bulk actions and filtering options in the GUI, and an optimized noVNC console.
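For the local-storage live migration, the CLI call looks roughly like this (VMID 101 and node name "nodeB" are hypothetical; check "qm help migrate" for the exact option names on your version):

> qm migrate 101 nodeB --online --with-local-disks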

And as always, we have included countless bugfixes and improvements in many places.

Video
Watch our short introduction video - What's new in Proxmox VE 5.0?

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.0

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I upgrade a 5.x beta installation to the stable 5.0 release via apt?
A: Yes, upgrading from beta to stable can be done via apt.
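A sketch of the usual steps, assuming your repository configuration already points at the Stretch-based Proxmox VE 5.x repositories:

> apt update
> apt dist-upgrade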

Q: Can I install Proxmox VE 5.0 on top of Debian Stretch?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
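Roughly, the wiki procedure adds the Proxmox VE repository and its signing key on top of a plain Stretch installation and then installs the proxmox-ve meta-package. A sketch only - the repository line below is an assumption, so copy the real repository line and the key installation step from the wiki page:

> echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" | tee /etc/apt/sources.list.d/pve-install-repo.list
> apt update && apt dist-upgrade
> apt install proxmox-ve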

Q: Can I dist-upgrade Proxmox VE 4.4 to 5.0 with apt dist-upgrade?
A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
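The wiki page is authoritative; the general shape (a sketch, not a replacement for reading it) is to switch the Debian and Proxmox repositories from jessie to stretch and then dist-upgrade:

> sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list   # adjust to your actual repository files
> apt update && apt dist-upgrade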

Q: I am running Ceph Server on V4.4 in a production setup - should I upgrade now?
A: Not yet. The Ceph packages in Proxmox VE 5.0 are based on the latest Ceph Luminous release, which is still at release-candidate status and therefore not yet recommended for production. You should, however, start testing in a lab environment; here is one important wiki article - https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous
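If you do start testing in a lab, the wiki article has the details; the general shape of a rolling Ceph upgrade looks roughly like this (a sketch only, not a substitute for the wiki steps):

> ceph osd set noout        # keep the cluster from rebalancing while daemons restart
(upgrade the packages and restart the monitors, then the OSDs, node by node, as described in the wiki)
> ceph -s                   # verify the cluster is healthy again
> ceph osd unset noout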

Many thanks to our active community for all the feedback, testing, bug reports and patch submissions!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Great work, thank you!
Is cloud-init support planned for the 5.x branch?
Is there up-to-date documentation for DRBD 8.x on PVE 5?
 
The news about the new replication system GUI is great, but neither this announcement, the Storage Replication wiki page, nor the Upgrade from 4.x to 5.0 wiki page mention how to handle the situation where you are already using pve-zsync. Will pve-zsync jobs get converted to the new replication-type jobs automatically? Will pve-zsync keep working the same post-upgrade, with manual intervention needed if you want those to become the new type of replication job? Or will pve-zsync jobs fail completely and have to be removed as part of the upgrade?
 
Whoa, congrats! Loving the new storage replication and online migration for local storage option! Keep it up!

Next targets: built-in CephFS and the new Ceph OSD Bluestore :D

If you want to play with Bluestore, just create your OSDs on the command line with:

> pveceph createosd /dev/XYZ --bluestore

(The GUI will still create XFS-based OSDs.)
 
The ability to install directly from the ISO without burning it to a CD (or fully writing it to the USB drive) is still unsupported.

This forces users to dedicate a USB stick to PVE, while all other major distros are able to install from the ISO file.

Any chance of getting this, plus a working console, in the coming days? If you change the terminal, the whole install crashes because it can't find the X screen on the new terminal.
 
The ability to install directly from the ISO without burning it to a CD (or fully writing it to the USB drive) is still unsupported.

This forces users to dedicate a USB stick to PVE, while all other major distros are able to install from the ISO file.

This is on our todo list, but it is a lot of work and not a top priority. USB sticks are cheap these days.
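For reference, the currently supported route is writing the ISO image to the whole stick, e.g. with dd. Double-check the device name first, since this overwrites the stick completely ("/dev/sdX" below is a placeholder):

> dd if=proxmox-ve_5.0*.iso of=/dev/sdX bs=1M conv=fdatasync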

Any chance of getting this, plus a working console, in the coming days? If you change the terminal, the whole install crashes because it can't find the X screen on the new terminal.

If you see a bug somewhere, please report it via:

https://bugzilla.proxmox.com
 
This is on our todo list, but it is a lot of work and not a top priority. USB sticks are cheap these days.

This is not an answer.
Yes, they are cheap, but it is still a waste of hardware and requires users to always use at least two USB sticks: one for PVE, one for everything else. Not really reasonable when a proper solution is possible...
 
Also, I don't see any changes in how the ZFS RAID is created. Currently PVE uses sd* device names instead of the way suggested by ZFS: something fixed across reboots like by-path, by-id or similar.

Changing this after the install requires multiple steps and is not that easy.

As was discussed on this forum, is it possible to have this fixed so that the RAID is created with fixed references to the disks?
 
Congrats, good job!!! The upgrade from 4.4 to 5.0 works like a charm, except that one must add the signing key for Stretch (gpg --search-keys EF0F382A1A7B6500 and then gpg --export EF0F382A1A7B6500 | apt-key add -).
 
Also, I don't see any changes in how the ZFS RAID is created. Currently PVE uses sd* device names instead of the way suggested by ZFS: something fixed across reboots like by-path, by-id or similar.
Not a problem, these are only symlinks, so you can change the disk references as you like. And you only see this on the rpool; all other pools use disk/by-id.
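For a non-root data pool, switching to by-id references is just an export and re-import; a minimal sketch, assuming a hypothetical pool named "tank" that is not in use at that moment:

> zpool export tank
> zpool import -d /dev/disk/by-id tank
> zpool status tank        # the devices should now be listed by their by-id names

(The rpool is a different story, since it cannot be exported while the system is booted from it.)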
 
Not a problem, these are only symlinks, so you can change the disk references as you like. And you only see this on the rpool; all other pools use disk/by-id.

Can you explain that in more detail?
How would I change from sda to references by path or by-id?
 
This is not an answer.
Yes, they are cheap, but it is still a waste of hardware and requires users to always use at least two USB sticks: one for PVE, one for everything else. Not really reasonable when a proper solution is possible...

Not sure what you really do in our community. Your posting style is not what we want here. Please accept the valid answer, and do not ask again and again, this is just a big waste of (my) time.
 
Hello PVE team, and congratulations on the new functionality! I was waiting for storage async replication this year for my customers :) It is very nice! Maybe a further improvement would be live migration on top of it, but I don't know if that is technically viable...
And two small questions while I am here:
1) When you upgrade your cluster, you do it node by node, so the cluster has both 4.4 and 5.0 PVE nodes during the upgrade procedure. Is that right, and is it OK?
2) The docs say "no VM or CT running": is that for the node you are upgrading, or for the whole cluster (i.e. a cluster-wide service outage)?
Thank you!
 
If you know what you are doing, you should be able to update your cluster node by node; live migration should/could also work.

If you can't afford downtime, take your time and test this upgrade in a test cluster setup. If you are unsure, you can also get help from our support team to analyze whether this is the way to go for you and what the issues could be.

By default, we suggest shutting down, as this always works.
 
Tom: good to know, thank you. My customer will maybe use support tickets for that :)
Just to be sure: can we (safely) do something like the following (see the rough CLI sketch below)?
- cluster 4.4 OK
- on node A: live-migrate all VMs to a "standby" node Z
- now there are no more VMs running on node A
- upgrade node A, reboot it
- on node Z: live-migrate all VMs to node A
- repeat for nodes B to Y
- upgrade node Z
Assuming the cluster is in a "simple VE" configuration, for example with external NFS or iSCSI storage?
And assuming tests have been made on a test cluster in order to verify that this procedure really works in our environment :)
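The per-node part of that loop, sketched on the CLI with hypothetical VMIDs and node names (the GUI bulk-migrate action does the same):

> qm migrate 101 nodeZ --online        # run on node A, repeat for every VM on it
(upgrade node A to 5.0 and reboot it)
> qm migrate 101 nodeA --online        # run on node Z to move the VMs back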
 
