Proxmox VE 4.4 released!

martin

Proxmox Staff Member
Apr 28, 2005
We are really excited to announce the final release of our Proxmox VE 4.4!

The most visible change is the new cluster and Ceph dashboard, together with an improved Ceph Server toolkit. For LXC, we now support unprivileged containers, CPU core limits and the new restart migration. We also updated the LXC templates for Debian, Ubuntu, CentOS, Fedora, Alpine and Arch.
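For example, creating an unprivileged container and capping its CPU cores can be sketched from the CLI. This is only a sketch: the VMID, storage name and template filename are made-up examples, and the exact `pct` option names for these new features may differ slightly in your release.

```shell
# Sketch only - assumes a PVE node with the Debian template already
# downloaded; VMID 200 and storage "local-lvm" are made-up examples.

# Create an unprivileged container (UID/GID-shifted, i.e. root inside
# the container is an unprivileged user on the host):
pct create 200 local:vztmpl/debian-8.0-standard_8.6-1_amd64.tar.gz \
    -hostname ct200 -unprivileged 1 -storage local-lvm

# Limit the container to 2 CPU cores:
pct set 200 -cores 2
```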

The whole HA stack and the GUI have been improved in several places, and using Proxmox VE High Availability is now more user friendly. In clusters, you can define a dedicated migration network, which is quite useful if you rely heavily on (live) migrations.
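The dedicated migration network is configured cluster-wide in /etc/pve/datacenter.cfg. A minimal sketch follows; the exact key syntax is an assumption and may differ between releases, and 10.10.10.0/24 is a made-up CIDR:

```shell
# /etc/pve/datacenter.cfg - sketch, assuming the migration key accepts
# a CIDR. Migration traffic then uses the interface in that network
# instead of the cluster network.
migration: secure,network=10.10.10.0/24
```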

Watch our short introduction video - What's new in Proxmox VE 4.4

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_4.4

ISO Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Upgrading
https://pve.proxmox.com/wiki/Downloads

Bugtracker
https://bugzilla.proxmox.com

A big THANK-YOU to our active community for all feedback, testing, bug reporting and patch submissions.
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Release notes do not mention Ceph updated to Jewel. Prior objectives for 4.4 made this a primary goal.

Could you comment on the status of Ceph?
 

Ceph 10.2.4/10.2.5 was released too close to PVE 4.4 to allow integration and testing. We are currently working on integrating it, and as soon as our tests are finished there will be updated packages for public testing - as always!
 
Is there a way to integrate Proxmox Ceph Dashboard with our own Ceph cluster which is already running separately? Nice update btw! :)
 
Does this mean that Ceph 10.2.3 (which is Jewel) was provided? Is Jewel specifically available, or just Hammer? What is the new timeline for the Jewel release? Thanks.
 
No - as has been discussed in this forum quite a lot, 10.2.3 has a bug that prevents upgrading a PVE-based Ceph setup from Hammer to Jewel.
 
Got it.

If you were to guess on timing, are we looking at ~6 months before a Jewel release with Proxmox? I have a cluster I'm about to bring up that I'd rather run on Proxmox, for the monitoring and the improved deployment structure. If we're talking a few weeks I can wait, but if we're talking months I probably can't afford to. Thanks.
 
Unless new major showstoppers are discovered, we are talking weeks, not months!
 
Hi,
what does "new restart migration" mean?
Does it mean that live migration now works for LXC?

Check out the video tutorial (see first post) - a running container is stopped, migrated and started again, all within seconds.

Live migration without downtime will never work 100 % reliably - due to the nature of containers - so this is a valid workaround. If you cannot live with these few seconds of downtime/restart, use qemu live migration.
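The restart migration described above can be triggered from the CLI roughly like this. The VMID and node name are made-up examples, and the `--restart`/`--timeout` flags are assumptions based on the feature name; check `pct help migrate` on your node.

```shell
# Sketch - assumes container 101 on the local node and a cluster node
# named "node2". With --restart, a running container is stopped,
# migrated, and started again on the target within seconds.
pct migrate 101 node2 --restart --timeout 180
```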
 
Thank you Tom for the reply. Today I did some tests.
It completely stops the container, transfers the disk and starts it again. But why was live migration of CTs possible with OpenVZ?

The old OpenVZ used a heavily customized kernel, and there was also a limited feature set inside OpenVZ containers. And no, this was never 100 % stable either; there were quite a few cases with problems.

Tools for LXC container live migration exist, but not everything can be live migrated (by design), and therefore we decided NOT to integrate this into Proxmox VE currently - maybe next year.

In contrast, qemu live migration is very reliable (as this is much easier).
 
No, you do not need a subscription.
 
Hi team, nice work - thanks for this.

A little feature request: you have a traffic graph, but traffic in, traffic out and total traffic are not listed below it. Could you add that, please? The graph also does not show traffic totals per day/week/month.
Best regards
 
I am not quite sure what you mean by this. The stats we have are the total traffic since the host was powered on, but I do not believe that is what you want.
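If the per-interface totals since boot are what you're after, they can be read on any Linux host directly from /proc/net/dev (a generic sketch, not a Proxmox API; the counters reset on reboot):

```shell
# Print cumulative RX/TX bytes per network interface since boot.
# After the colon, field 1 is received bytes and field 9 is
# transmitted bytes; the first two lines of the file are headers.
awk -F: 'NR > 2 {
    iface = $1; gsub(/ /, "", iface)   # strip padding from the name
    split($2, f, " ")                  # split the counter columns
    printf "%-10s rx=%s bytes  tx=%s bytes\n", iface, f[1], f[9]
}' /proc/net/dev
```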
 
