Proxmox VE 6.0 released!

Marco Trevisan

New Member
Just curious: I have Proxmox 5.4, not yet in a cluster. The idea is to upgrade to 6.0 because I cannot get ZFS to boot, I'm guessing because it is running on an HP Smart Array P440ar in HBA mode (even with UEFI disabled), but I saw that 6.0 boots ZFS via UEFI. My question is: how stable is it? I am going to try it out this week, and if it works, how stable or recommended is it to combine it in a cluster with 5.4 on the other hosts?
My experience with the upgrade was very smooth, the current 6.0 (6.0-7) is very stable on our cluster. A "lazy sysadmin" can freeze it and forget it for a while.
In the first step of my upgrade only one node was running 6.0. I did a brief test of what worked from the 5.4 web UI and had no particular problems. I could still stop/start machines (even those on the 6.0 node) and replicate VMs. Migrating is supported from a lower version to a higher one; the other way around should work, but it is not supported.
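For reference, the online migration I tested can also be done from the CLI; the VM ID 100 and the node name pve6-node below are placeholders for your own:

# live-migrate VM 100 to the already-upgraded node while it keeps running
qm migrate 100 pve6-node --online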
It goes without saying, however, that you should upgrade all the cluster nodes as soon as you can; there is no point in keeping different configurations and kernels on the cluster nodes.
It also goes without saying that you have to stick closely to the upgrade procedure. It is not difficult, but skipping steps is likely to cause issues.

Regards,
Marco
 

Stoiko Ivanov

Proxmox Staff Member

Marco Trevisan

New Member
Thanks for the reply, so the upgrade procedure would be something like this?

Add this to /etc/apt/sources.list:

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

then run:

apt update
apt upgrade
apt clean && apt autoclean

then reboot
Stoiko's reply contains the link to the procedure I followed. Sorry I didn't add it in my earlier response; I assumed you had already read it and were just unsure about whether to go ahead.
Depending on how mission-critical your cluster is and on your experience level, you may feel uncomfortable doing the upgrade yourself. In that case, and I'm not a Proxmox salesman, you can always get the support level you need with one of the available paid subscriptions, which by the way also unlocks access to the enterprise repository ;-)
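For the record, the documented procedure differs from your sketch in two key points: every repository entry must be switched from stretch to buster, and you need apt dist-upgrade rather than apt upgrade. A condensed sketch (check each step against the wiki before running it; the .list file name below is just an example):

pve5to6    # run the built-in checklist first and fix what it flags
sed -i 's/stretch/buster/g' /etc/apt/sources.list
# point the PVE repository itself at buster too, e.g. for no-subscription:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt dist-upgrade    # dist-upgrade, not plain 'apt upgrade'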

In my case the upgrade has proven to be well worth the effort.

Regards,
Marco
 

fbifido

New Member
Hi!
Today I upgraded to PVE6 on a 4-node cluster. The update went smoothly! :)
Ceph updated to Nautilus without problems too!
Thanks for the great work!

Best regards
Gosha
Which version did you upgrade from?
Do you have any fio/performance stats from before and after?
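Something like this 4k random-write run, executed on the same storage before and after the upgrade, would make the numbers comparable (the file path is just a placeholder for a file on the storage under test):

fio --name=randwrite --filename=/mnt/pve/teststore/fio.test \
    --rw=randwrite --bs=4k --size=4G --runtime=60 --time_based \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1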
 

t.lamprecht

Proxmox Staff Member
Can we do a rolling update by migrating VMs to other nodes to keep them online, or do we have to shut down all VMs before doing the upgrade?
Yes, see also: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Move_important_Virtual_Machines_and_Containers

I can see corosync v3 had a lot of issues here: https://forum.proxmox.com/threads/pve-5-4-11-corosync-3-x-major-issues.56124/ so is it stable now?
Yes, it should be better now. Together with the upstream developers we were able to identify a real issue, which was a bit hard to reproduce, and fix a possible crash of corosync/knet. Some other smaller fixes were also made. There are a few things left to optimize, but there is no known crash or hard bug with kronosnet 1.12 anymore.

Just be sure to closely follow https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 when upgrading.
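The pve5to6 checker that ships with 5.4 is the first step of that procedure and is safe to re-run as often as you like:

# on each node, before changing any repository:
pve5to6
# resolve every FAILURE (ideally every WARNING too), then re-run until clean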
 

jtracy

New Member
For anyone upgrading to 6.0, please read "Issues to be aware of for buster". I and a few other users here have been hit by the problems listed in section 5.1.5, where network interface names come up differently after the upgrade and the cluster will not come up correctly.
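One way to protect yourself before rebooting into buster is to pin each NIC name with a systemd .link file; the file name, MAC address, and interface name below are placeholders for your own:

# /etc/systemd/network/10-persistent-net.link (example file name)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eno1

# then rebuild the initramfs so the rule applies at early boot:
#   update-initramfs -u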

Mods, could the documentation for the upgrade process list this document as something to review before attempting the upgrade?
 

t.lamprecht

Proxmox Staff Member
We're excited to announce the final release of our Proxmox VE 6.0! It's based on the great Debian 10 codename "Buster" and the latest 5.0 Linux kernel, QEMU 4.0, LXC 3.1.0, ZFS 0.8.1, Ceph 14.2, Corosync 3.0, and more.
[...]

FAQ
Q: Can I dist-upgrade Proxmox VE 5.4 to 6.0 with apt?

A: Please follow the upgrade instructions exactly, as there is a major version bump of corosync (2.x to 3.x)
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

Q: Can I install Proxmox VE 6.0 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
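In short, that page boils down to roughly the following on a plain Buster install (a sketch only; the wiki is authoritative, and the .list file name is just an example):

echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi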
[...]
Many thanks to the developers; Proxmox VE has long proven to be a very reliable tool for us.

With the help of the good documentation and the well-made guide, we recently managed the move from VE 5.4 to 6.0, together with the step from Debian 9 to 10.1, very smoothly and without problems, despite the effort it required on our side (necessary as Community subscribers). The pve5to6 script also gave us valuable hints about a few minor slips in our own configuration; after the necessary corrections, the subsequent upgrade worked very well.
 

Kaboom

Member
I finally upgraded our cluster successfully to Proxmox 6, Corosync 3, and Ceph Nautilus without (!) any downtime. It took me about two weeks. There were some issues: for example, you have to remove KernelCare before starting the Proxmox upgrade, and my OSDs would not start after a node restart once I upgraded Ceph to Nautilus. But I deleted them one by one, recreated them with the GUI, and everything runs great.
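For anyone facing the same OSD problem, the CLI equivalent of what I did per OSD is roughly this (OSD ID 3 and /dev/sdX are placeholders; wait for the cluster to rebalance to HEALTH_OK before moving on to the next one):

ceph osd out 3                      # take the OSD out and let data drain
systemctl stop ceph-osd@3.service
pveceph osd destroy 3 --cleanup     # remove the OSD and wipe its disk
pveceph osd create /dev/sdX         # recreate it as a fresh Nautilus OSD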

It surprises me again how stable Ceph is; Ceph and I are big friends :)

I also want to thank the Proxmox team for their great manual and support, and also the forum members who helped me out.
 

nttec

Member
Is it possible to update only corosync without any problems? I am not planning to upgrade to 6 until I have my backup server ready for this.
 

t.lamprecht

Proxmox Staff Member
Is it possible to update only corosync without any problems? I am not planning to upgrade to 6 until I have my backup server ready for this.
Yes, although the PVE 5.4 cluster management stack cannot cope with adding new nodes once corosync 3/kronosnet is installed.
So I'd only do it if your HW setup is stable and you plan to upgrade relatively soon.
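Condensed from the wiki section "Cluster: always upgrade to Corosync 3 first" (do this on all nodes in quick succession, and double-check the repository line against the wiki):

# stop the HA services first so no node fences itself during the switch
systemctl stop pve-ha-lrm pve-ha-crm
# add the corosync 3 repository built for stretch, then upgrade
echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" > /etc/apt/sources.list.d/corosync3.list
apt update
apt dist-upgrade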
 

Sarlis Dimitris

New Member
To all Proxmox staff, a general question regarding video tutorials:
Are you planning to release any new videos, specifically for a Ceph Nautilus setup in version 6?
Thanks
 
