Proxmox VE 6.0 released!

Marco Trevisan

New Member
Nov 6, 2018
Thanks for the reply, so the upgrade procedure would be something like this?




add this to /etc/apt/sources.list (e.g. with nano):

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

then run:

apt update
apt upgrade
apt clean && apt autoclean

then reboot
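For reference, the repository switch implied by the steps above can be sketched like this. It is only an illustration against a throwaway copy of sources.list (on a real node you would edit /etc/apt/sources.list and /etc/apt/sources.list.d/*.list); note that per the official upgrade guide the repositories must point at buster, not stretch, and the final step is apt dist-upgrade rather than plain apt upgrade:

```shell
# Demo only: rewrite a throwaway copy of a stretch-era sources.list.
# On a real node, edit /etc/apt/sources.list and sources.list.d/*.list.
cat > /tmp/sources.list.demo <<'EOF'
deb http://ftp.debian.org/debian stretch main contrib
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
EOF

# The 5.x -> 6.0 guide has you replace every "stretch" with "buster".
sed -i 's/stretch/buster/g' /tmp/sources.list.demo
cat /tmp/sources.list.demo

# Then, on the node itself:
#   apt update
#   apt dist-upgrade    # full upgrade; plain "apt upgrade" is not enough
#   reboot
```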
Stoiko's reply contains the link to the procedure I followed. Sorry I didn't add it in my earlier response; I assumed you had already read it and were unsure about the "go or no go".
Depending on how mission-critical your cluster is and on your experience level, you may feel uncomfortable with the upgrade. In that case (I'm not a Proxmox salesperson, but) you can always get the support level you need with one of the available paid subscriptions, which by the way also unlocks access to the enterprise repository ;-)

In my case the upgrade has proven to be well worth the effort, by far.

Regards,
Marco
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
can we do a rolling update by migrating VMs to other nodes to keep them online, or do we have to shut down all VMs before doing the upgrade?
Yes, see also: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Move_important_Virtual_Machines_and_Containers

I can see corosync v3 had a lot of issues here: https://forum.proxmox.com/threads/pve-5-4-11-corosync-3-x-major-issues.56124/ so is it stable now?
Yes, it should be much better now. Together with the upstream developers we identified a real issue that was a bit hard to reproduce and fixed a possible crash of corosync/knet. Some other smaller fixes were also made. There are a few things left to optimize, but no known crashes or hard bugs with kronosnet 1.12 anymore.

Just be sure to closely follow https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 when upgrading.
 

jtracy

New Member
Aug 30, 2018
For anyone upgrading to 6.0, please read "Issues to be aware of" for Buster. I and a few other users here have been hit by the problems listed in section 5.1.5, where the network interface names do not come up the same after the upgrade and the cluster will not come up correctly.

Mods, can the documentation for the upgrade process list this document as something to review before attempting the upgrade?
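One commonly suggested guard against the interface-renaming issue from section 5.1.5 is to pin each NIC's name to its MAC address with a systemd .link file before upgrading. A hedged sketch, with placeholder MAC and interface name, written to a demo directory rather than the real /etc/systemd/network/:

```shell
# Pin the interface currently called ens18 to that name via its MAC.
# Placeholder values: substitute the real MAC and name from "ip link".
# Written to a demo directory here; the real path is /etc/systemd/network/.
mkdir -p /tmp/demo-systemd-network
cat > /tmp/demo-systemd-network/10-ens18.link <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=ens18
EOF
cat /tmp/demo-systemd-network/10-ens18.link

# After copying the file to /etc/systemd/network/, run "update-initramfs -u"
# so the pin is honoured early in boot, then reboot and verify with "ip link".
```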
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
Oct 16, 2019
We're excited to announce the final release of our Proxmox VE 6.0! It's based on the great Debian 10 codename "Buster" and the latest 5.0 Linux kernel, QEMU 4.0, LXC 3.1.0, ZFS 0.8.1, Ceph 14.2, Corosync 3.0, and more.
[...]

FAQ
Q: Can I dist-upgrade Proxmox VE 5.4 to 6.0 with apt?

A: Please follow the upgrade instructions exactly, as there is a major version bump of corosync (2.x to 3.x)
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

Q: Can I install Proxmox VE 6.0 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
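Condensed from the "Install Proxmox VE on Debian Buster" wiki page linked above, the install boils down to adding the Proxmox repository and key, then installing the meta-package. Treat the exact commands as a sketch and verify against the wiki; the demo below only writes the repo line to a temp file so nothing on this machine is touched:

```shell
# Demo: the pve-no-subscription repository line for Debian Buster.
cat > /tmp/pve-install-repo.list.demo <<'EOF'
deb [arch=amd64] http://download.proxmox.com/debian/pve buster pve-no-subscription
EOF
cat /tmp/pve-install-repo.list.demo

# On the target Buster host (as root), roughly:
#   cp /tmp/pve-install-repo.list.demo /etc/apt/sources.list.d/pve-install-repo.list
#   wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
#        -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
#   apt update && apt full-upgrade
#   apt install proxmox-ve postfix open-iscsi
```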
[...]
(Translated from German:)
My sincere thanks to the developers; Proxmox VE has long proven to be a very reliable tool for us.

With the help of the good documentation and the well-made guide we recently managed the move from VE 5.4 to 6.0, together with the transition from Debian 9 to 10.1, very smoothly and without problems, despite the effort this necessarily entails for a Community subscriber. The pve5to6 checker script also gave us valuable hints about some minor slips in our own configuration, and after the necessary corrections the subsequent transition worked very well.
 

Kaboom

Member
Mar 5, 2019
I finally upgraded our cluster successfully to Proxmox 6, Corosync 3, and Ceph Nautilus without (!) any downtime. It took me about two weeks. There were some issues: for example, you have to remove KernelCare before starting the Proxmox upgrade. I also had problems with my OSDs not starting after a node restart when I upgraded Ceph to Nautilus, but I deleted them one by one, recreated them with the GUI, and now everything runs great.

It surprises me again how stable Ceph is; Ceph and I are big friends :)

I also want to thank the Proxmox team for their great manual and support, and also the forum members who helped me out.
 

nttec

Active Member
Jun 1, 2016
Is it possible to update only corosync without any problems? I am not planning to upgrade to 6 yet, not until I have my backup server up for this.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
Is it possible to update only corosync without any problems? I am not planning to upgrade to 6 yet, not until I have my backup server up for this.
Yes, although the PVE 5.4 cluster management stack cannot cope with adding nodes once corosync 3/kronosnet is installed.
So I'd only do it if your hardware setup is stable and you plan to upgrade relatively soon.
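The corosync-only step is described in the upgrade wiki's "Cluster: always upgrade to Corosync 3 first" section; the sketch below paraphrases it, writes the repo line to a demo file only, and the service stop/start ordering should be verified against the wiki before running anything on a cluster:

```shell
# Demo: the Corosync 3 repository for PVE 5.x (still on stretch).
cat > /tmp/corosync3.list.demo <<'EOF'
deb http://download.proxmox.com/debian/corosync-3 stretch main
EOF
cat /tmp/corosync3.list.demo

# On every node (as root), roughly:
#   systemctl stop pve-ha-lrm pve-ha-crm     # stop HA services cluster-wide first
#   cp /tmp/corosync3.list.demo /etc/apt/sources.list.d/corosync3.list
#   apt update && apt dist-upgrade           # pulls in corosync 3.x / kronosnet
#   systemctl start pve-ha-lrm pve-ha-crm    # once the whole cluster is done
```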
 

Sarlis Dimitris

New Member
Oct 19, 2018
To all Proxmox staff, a general question regarding video tutorials:
are you planning to release any new videos, specifically for the Ceph Nautilus setup in version 6?
Thanks
 

Jeffwolf

New Member
Apr 16, 2020
I did a standalone box yesterday with the same problem and had to kill a few processes to get it moving again. I hit the same problem today on the first of my home boxes. I didn't take notes yesterday, but today I had the following:
Code:
"Setting up lxc-pve (3.1.0-61) ..."
Resolved by killing:
/bin/systemctl restart lxc-monitord.service lxc-net.service
/bin/sh - /usr/lib/x86_64-linux-gnu/lxc/lxc-net start
/bin/sh /var/lib/dpkg/info/lxc-pve.postinst configure 3.1.0-3


"Setting up pve-ha-manager (3.0-2) ..."
Resolved by killing:
/bin/systemctl restart pve-ha-lrm.service
Edit: After the dist-upgrade finished, I was left with several pve packages not configured... A reboot is required to get these to configure successfully; attempting to configure them before rebooting just results in the same freeze.

I have managed to get a fully functional Proxmox working inside AWS. The only condition is that if you want to run KVM you will need a bare-metal instance type; for LXC it works just fine.
I still have to figure out the networking portion, as AWS networking is very customized, so I am still looking for a workaround.

If you would like to know how, let me know.
 
Jan 21, 2017
I have managed to get a fully functional Proxmox working inside AWS. The only condition is that if you want to run KVM you will need a bare-metal instance type; for LXC it works just fine.
I still have to figure out the networking portion, as AWS networking is very customized, so I am still looking for a workaround.

If you would like to know how, let me know.
Interesting. What is your use case for that?
Just from the pricing point of view this seems very expensive. AWS already provides more or less what you can do with PVE, so why the additional layer?
 

Jeffwolf

New Member
Apr 16, 2020
Interesting. What is your use case for that?
Just from the pricing point of view this seems very expensive. AWS already provides more or less what you can do with PVE, so why the additional layer?

Hello,

thanks for the response.

Use case: I had to find a way to reduce cost and provide a relatively easy way to move from one provider to another.

I managed to reduce costs by taking 200 EC2 instances costing just over $30,000 a month down to a 3-node cluster with Ceph enabled costing just over $6,000 a month.

I also built a similar setup in GCP for roughly the same cost; this would be a warm redundant solution with daily backups for any code changes and so on.

I am now working on the remaining 600 EC2 instances we have running, so you can save up to 75% of your cost if configured correctly.

In these times, reducing cost in any way saves jobs...
 
