Proxmox VE 5.0 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jul 4, 2017.

Thread Status:
Not open for further replies.
  1. martin

    martin Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    633
    Likes Received:
    319
    We are very happy to announce the final release of Proxmox VE 5.0 - based on the great Debian 9 (codename "Stretch") and Linux kernel 4.10.

    New Proxmox VE Storage Replication Stack
    Replicas provide asynchronous data replication between two or more nodes in a cluster, thus minimizing data loss in case of failure. For organizations using local storage, the Proxmox replication feature is a great option to increase data redundancy for high-I/O workloads, avoiding the need for complex shared or distributed storage configurations.
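
    Replication jobs can also be managed from the command line with the 'pvesr' tool; a minimal sketch (the VM ID 100, the job number 0, and the target node name "pve2" below are placeholders):

    > # create a replication job for VM 100 to node "pve2", running every 15 minutes
    > pvesr create-local-job 100-0 pve2 --schedule "*/15"
    > # show the status of the replication jobs on this node
    > pvesr status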

    With Proxmox VE 5.0, Ceph RBD becomes the de facto standard for distributed storage. Packaging is now done by the Proxmox team. Ceph Luminous is not yet production-ready, but it is already available for testing. If you use Ceph, follow the recommendations below.

    We also have a simplified procedure for disk import from other hypervisors. You can now easily import disks from VMware, Hyper-V, or other hypervisors via the new command-line tool 'qm importdisk'.
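
    For example (the VM ID, source path, and target storage below are placeholders):

    > # import an existing VMware disk and attach it to VM 100 as an unused disk on storage "local-lvm"
    > qm importdisk 100 /mnt/import/vm-disk.vmdk local-lvm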

    Other new features include live migration with local storage via QEMU, USB and host PCI address visibility in the GUI, bulk actions and filtering options in the GUI, and an optimized noVNC console.
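
    Live migration with local storage can be triggered from the command line; a sketch (the VM ID and target node name are placeholders):

    > # live-migrate VM 100 to node "pve2", including its local disks
    > qm migrate 100 pve2 --online --with-local-disks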

    And as always, we have included countless bug fixes and improvements in many places.

    Video
    Watch our short introduction video - What's new in Proxmox VE 5.0?

    Release notes
    https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.0

    Download
    https://www.proxmox.com/en/downloads
    Alternate ISO download:
    http://download.proxmox.com/iso/

    Source Code
    https://git.proxmox.com

    Bugtracker
    https://bugzilla.proxmox.com

    FAQ
    Q: Can I upgrade a 5.x beta installation to the stable 5.0 release via apt?
    A: Yes, upgrading from beta to stable can be done via apt.

    Q: Can I install Proxmox VE 5.0 on top of Debian Stretch?
    A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch

    Q: Can I dist-upgrade Proxmox VE 4.4 to 5.0 with apt dist-upgrade?
    A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
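
    A rough sketch of the upgrade steps (always read the full wiki article first; the repository file name and the no-subscription repository shown below are assumptions, adjust for your setup):

    > # switch all Debian repositories from jessie to stretch
    > sed -i 's/jessie/stretch/g' /etc/apt/sources.list
    > # point the Proxmox VE repository at stretch (no-subscription variant shown)
    > echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
    > apt update && apt dist-upgrade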

    Q: I am running Ceph Server on V4.4 in a production setup - should I upgrade now?
    A: Not yet. The Ceph packages in Proxmox VE 5.0 are based on the latest Ceph Luminous release, which is still at release-candidate status and therefore not yet recommended for production. But you should start testing in a test-lab environment; here is one important wiki article - https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous

    Many thanks to our active community for all the feedback, testing, bug reports, and patch submissions!

    __________________
    Best regards,

    Martin Maurer
    Proxmox VE project leader
     
    liquidox, BloodyIron, micro and 5 others like this.
  2. godlike

    godlike New Member

    Joined:
    Jul 23, 2014
    Messages:
    15
    Likes Received:
    1
    Great work, thank you!
    Is cloud-init support planned for the 5.x branch?
    Is there up-to-date documentation for DRBD 8.x on PVE 5?
     
  3. brwainer

    brwainer Member

    Joined:
    Jun 20, 2017
    Messages:
    35
    Likes Received:
    3
    The news about the new replication GUI is great, but neither this announcement, the Storage Replication wiki page, nor the Upgrade from 4.x to 5.0 wiki page mentions how to handle the situation where you are already using pve-zsync. Will pve-zsync jobs get converted to the new replication jobs automatically? Will pve-zsync keep working the same post-upgrade, with manual intervention needed if you want those to become the new type of replication job? Or will pve-zsync jobs fail completely and have to be removed as part of the upgrade?
     
  4. chrone

    chrone Member

    Joined:
    Apr 15, 2015
    Messages:
    110
    Likes Received:
    14
    Whoa, congrats! Loving the new storage replication and the online migration with local storage option! Keep it up!

    Next targets: built-in CephFS and the new Ceph OSD BlueStore :D
     
    Dan Nicolae and gkovacs like this.
  5. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,449
    Likes Received:
    389
    If you want to play with Bluestore, just create your OSDs on the commandline with:

    > pveceph createosd /dev/XYZ --bluestore

    (The GUI will still create XFS-based OSDs.)
     
    fireon and chrone like this.
  6. chrone

    chrone Member

    Joined:
    Apr 15, 2015
    Messages:
    110
    Likes Received:
    14
    Awesome! :D
     
  7. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    I've read "better ISO installer" in the release notes.
    What has changed in the ISO installer?
     
  8. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,449
    Likes Received:
    389
  9. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    The ability to install directly from the ISO without burning it to a CD (or fully writing it to a USB drive) is still unsupported.

    This forces users to dedicate a USB stick to PVE, while all other major distros can install from the ISO file.

    Any chance of getting this, and a working console, in the coming days? If you switch terminals, the whole installer crashes because it can't find the X screen on the new terminal.
     
  10. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,449
    Likes Received:
    389
    This is on our todo list, but it is a lot of work and not a top priority. USB sticks are cheap these days.

    If you see a bug somewhere, please report it via:

    https://bugzilla.proxmox.com
     
    Manny Vazquez likes this.
  11. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    This is not an answer.
    Yes, they are cheap, but it is still a waste of hardware and requires users to always carry at least two USB sticks: one for PVE, one for everything else. Not really useful, as a solution is possible...
     
  12. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    Also, I don't see any changes in how the ZFS RAID is created. Currently PVE uses sd* device names instead of the way recommended by ZFS: references that are stable across reboots, such as by-path or by-id.

    Changing this after the install requires multiple steps and is not that easy.

    As discussed on this forum, is it possible to have this fixed, so the RAID is created with stable references to the disks?
     
  13. vooze

    vooze Member

    Joined:
    May 11, 2017
    Messages:
    77
    Likes Received:
    17
    So, how is Ryzen working now?

    I'm ready to pull the trigger and buy, but I don't want to if I should wait for 5.1 and a newer kernel.
     
  14. ProxFuchs

    ProxFuchs New Member

    Joined:
    Jul 4, 2017
    Messages:
    2
    Likes Received:
    2
    Congrats, good job!!! The upgrade from 4.4 to 5.0 works like a charm, except that one must add the signing key for Stretch (gpg --search-keys EF0F382A1A7B6500, then gpg --export EF0F382A1A7B6500 | apt-key add -).
     
    BloodyIron likes this.
  15. fireon

    fireon Well-Known Member
    Proxmox Subscriber

    Joined:
    Oct 25, 2010
    Messages:
    2,924
    Likes Received:
    168
    Not a problem, these are only symlinks, so you can change the disk bus as you like. And you only see that on rpool; all other pools use disk/by-id.
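
    For a data pool (not the root pool, which cannot be exported while the system is running), the references can be switched to by-id with an export/import cycle; a sketch with a placeholder pool name "tank":

    > zpool export tank
    > # re-import the pool, scanning /dev/disk/by-id for its devices
    > zpool import -d /dev/disk/by-id tank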
     
  16. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    Can you explain that in more detail?
    How would I change from sda to by-path references?
     
  17. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,449
    Likes Received:
    389
    I am not sure what you are really doing in our community. Your posting style is not what we want here. Please accept the valid answer and do not ask again and again; this is just a big waste of (my) time.
     
  18. ghusson

    ghusson Member
    Proxmox Subscriber

    Joined:
    Feb 25, 2015
    Messages:
    82
    Likes Received:
    4
    Hello PVE team, and congratulations on the new functionality! I was waiting for storage async replication this year for my customers :) It is very nice! Maybe a further improvement would be live migration on top of it, but I don't know if that is technically viable...
    And two small questions while I am here:
    1) When you upgrade your cluster, you do it node by node, so the cluster has both 4.4 and 5.0 nodes during the upgrade procedure. Is that right and OK?
    2) The docs say "no VM or CT running": is that for the node you are upgrading, or for the whole cluster (i.e. a cluster-wide service outage)?
    Thank you!
     
  19. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,449
    Likes Received:
    389
    If you know what you are doing, you should be able to update your cluster node by node, and live migration should/could work.

    If you can't afford downtime, take your time and test this upgrade in a test cluster setup. If you are unsure, you can also get help from our support team to analyze whether this is the way to go for you and what the issues could be.

    By default, we suggest shutting down, as this always works.
     
  20. ghusson

    ghusson Member
    Proxmox Subscriber

    Joined:
    Feb 25, 2015
    Messages:
    82
    Likes Received:
    4
    Tom: good to know, thank you. My customer will maybe use support tickets for that :)
    Just to be sure, can we (safely) do something like the following:
    - cluster 4.4 OK
    - on node A: live migrate all VMs to a "standby" node Z
    - now there are no more VMs running on node A
    - upgrade node A, reboot it
    - on node Z: live migrate all VMs back to node A
    - repeat for nodes B to Y
    - upgrade node Z
    Assuming the cluster is in a "simple VE" configuration, for example with external NFS or iSCSI storage?
    And assuming that a test has been done on a test cluster in order to verify that this procedure really works in our environment :)
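
    A hedged sketch of the "migrate all VMs off a node" step from the command line (the target node name "pve-z" is a placeholder, and the loop assumes every running VM on the node can be live-migrated):

    > # live-migrate every running VM on this node to node "pve-z"
    > for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
    >     qm migrate $vmid pve-z --online
    > done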
     
    BloodyIron likes this.