Proxmox VE 6.2 released!

martin
Proxmox Staff Member
We are proud to announce the general availability of our virtualization management platform Proxmox VE 6.2.

It's built on Debian Buster 10.4 with a 5.4 long-term support Linux kernel, and includes QEMU 5.0, LXC 4.0, Ceph 14.2.9 (Nautilus), and ZFS 0.8.3.

This release brings built-in validation of domains for Let's Encrypt TLS certificates via the DNS-based challenge mechanism, full support for up to eight Corosync network links, Zstandard (zstd) compression for backup and restore, LDAP synchronization of users and groups, and full support for API tokens.
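
As a quick illustration of the new Zstandard support, a backup can be compressed with zstd directly from the shell; the VM ID and storage name below are placeholders, not taken from this announcement:

# Back up VM 100 with the new zstd compression (VM ID and storage name are examples)
vzdump 100 --compress zstd --storage local --mode snapshot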

Countless bug fixes and smaller improvements are included as well; see the full release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.2

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-2

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I dist-upgrade Proxmox VE 6.x to 6.2 with apt?
A: Yes, either via the GUI or on the CLI with apt update && apt dist-upgrade.
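
For reference, a minimal upgrade session on an existing 6.x node could look like the following sketch; pveversion is only used here to confirm the result:

# Refresh the package lists and upgrade all packages to 6.2
apt update
apt dist-upgrade
# Confirm the installed Proxmox VE version afterwards
pveversion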

Q: Can I install Proxmox VE 6.2 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
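
Very roughly, the linked guide boils down to adding a Proxmox VE package repository and installing the proxmox-ve meta-package; the sketch below assumes the no-subscription repository and omits the repository key and hosts-file steps described in the wiki:

# Add the Proxmox VE no-subscription repository for Buster
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
# Update, upgrade, and install the Proxmox VE meta-package
apt update && apt full-upgrade
apt install proxmox-ve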

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 5.4 to 6.2, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly (see also the note below the links).
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
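
Before starting that upgrade, the pve5to6 checklist script shipped with Proxmox VE 5.4 can be run on each node to flag known issues; this is only a hint, the linked wiki articles remain the authoritative guide:

# Run the built-in 5.x to 6.x upgrade checklist on every node
pve5to6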

Q: Where can I get more information about future feature updates?
A: Check our roadmap, forum, and mailing lists, and subscribe to our newsletter.

A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Cool! Since this uses LXC 4.0, is cgroup v2 (the unified hierarchy) now used? Especially for the memory controller, to limit swap properly?
 
Some initial cgroupv2 support in our container toolkit (pct) is in fact included, but you need to boot the host into cgroupv2 to be able to use it; both versions cannot really coexist on a system. The support will be improved over the next releases.
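
For anyone who wants to experiment anyway, booting a systemd-based host into the unified hierarchy is usually done via a kernel command line switch; treat the following as an untested sketch for a GRUB-based test machine, not a recommendation:

# Add the switch to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"
# then refresh the boot loader config and reboot:
update-grub
reboot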
 
Nice to hear that. What are the downsides of booting with cgroupv2 (and therefore disabling cgroupv1)? Will it break something important?
 
For now I wouldn't recommend it for production systems. It really shouldn't break anything, but some limits aren't yet enforced at all, and some container distros may complain about v2 if they're on older releases, I'd guess.
 
Hi! Congratulations on the release!
How can I install this with encryption for data at rest for the OS disk?
This is important because my private cloud is not stored as securely, physically, as it should be, and any better, more secure solution is outside of my budget. Therefore I want to encrypt data at rest on all storage, including the OS disk.
 
Please open a new thread for this question.
 
Awesome release! Thanks for all of your work!

  • Enable support for Live-Migration with replicated disks

I updated from 6.1 to 6.2, but when I try to migrate a VM I get the following error message:

2020-05-12 15:51:12 starting migration of VM 103 to node 'XXX' (XX.XX.XX.XX
2020-05-12 15:51:12 found local, replicated disk 'local-zfs:vm-103-disk-0' (in current VM config)
2020-05-12 15:51:12 can't migrate local disk 'local-zfs:vm-103-disk-0': can't live migrate attached local disks without with-local-disks option
2020-05-12 15:51:12 ERROR: Failed to sync data - can't migrate VM - check log
2020-05-12 15:51:12 aborting phase 1 - cleanup resources
2020-05-12 15:51:12 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
TASK ERROR: migration aborted

Or is live Migration not possible in the GUI?
 
It is, although the "Target Storage" selector isn't yet smart enough to prefer already-replicated storages for a VM, if any, so you need to select the correct one yourself for now.
Also, ensure you reloaded the web interface cleanly (otherwise you may still be using a backend worker running the old code that hasn't been phased out yet).
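
For comparison, on the CLI the relevant option can be passed explicitly to qm migrate; the VM ID and node name below are placeholders:

# Live-migrate VM 103 together with its local/replicated disks (VM ID and target node are examples)
qm migrate 103 targetnode --online --with-local-disks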
 
I only have one storage (for VM and CT images) on each host.

Seems like the issue was caused by HA. As soon as I removed the VM from the HA group it migrated totally fine.

Is it possible that the migration, if issued by HA, is missing the "with-local-disks" option? (Like the logs are saying ^^)
 
Hmm, we pass force to ignore some local checks, but yes, "with-local-disks" isn't yet passed. Can you please open an enhancement request over at our bug and feature tracker: https://bugzilla.proxmox.com/
 
My Open vSwitch config failed after upgrading from 6.1 to 6.2, so after the reboot I have no networking, only console access.

Now I get constant messages in the console that make it impossible to do anything there to troubleshoot. I get the message below:

libceph: connect error -101

Can someone please help?

Attached is the network config file from before the upgrade.

It is Open vSwitch with three bond interfaces and VLANs.
 

Attachments

  • interfaces_pve_ceph_01_bond_2019-11-27.txt (3.7 KB)
Just set up ACME using DNS and it's easy mode!

Looking forward to not having to copy my wildcard certs to each Proxmox node when the old ones expire.
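
For anyone wanting to reproduce this from the shell, my understanding is that the rough flow is to register an ACME account, set the node's domain, and order the certificate, with the DNS challenge plugin itself most easily configured under Datacenter -> ACME in the GUI; the e-mail address and domain below are placeholders:

# Register an ACME account (e-mail address is an example)
pvenode acme account register default admin@example.com
# Configure the domain the certificate should cover (domain is an example)
pvenode config set --acme domains=pve1.example.com
# Order (or later renew) the certificate
pvenode acme cert order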
 
That is a symptom, not the cause of the actual error. Please open a new thread, check the syslog / journal for errors, and post your network config.
Hi t.lamprecht

I have managed to make this message go away by using the command below:

sudo dmesg -n 1

After this the messages stopped and I could investigate what went wrong.

It seems the package below was not upgraded or was removed during the upgrade, and it is required for bonding:

ifenslave

Can you please confirm whether there is a new way of configuring bond interfaces on Open vSwitch?
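
For comparison, a plain OVS bond plus bridge in /etc/network/interfaces on Proxmox VE usually looks roughly like the sketch below; the NIC names, bridge name, and bond mode are examples only, and as far as I know ifenslave is only needed for classic Linux bonds, not for OVS bonds:

# Rough sketch of an Open vSwitch bond attached to an OVS bridge
# (NIC names, bridge name, and bond_mode are placeholders)
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0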
 
