Proxmox VE 7.0 (beta) released!

martin

Proxmox Staff Member
Staff member
Apr 28, 2005
We are pleased to announce the first beta release of Proxmox Virtual Environment 7.0! The 7.x family is based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, and OpenZFS 2.0.4.

Note: The current release of Proxmox Virtual Environment 7.0 is a beta version. If you test or upgrade, make sure to first create backups of your data. We recommend Proxmox Backup Server to do so.

Here are some of the highlights of the Proxmox VE 7.0 beta version:
  • Ceph Server: Ceph Pacific 16.2 is the new default; Ceph Octopus 15.2 remains supported.
  • BTRFS: a modern copy-on-write file system natively supported by the Linux kernel, implementing features such as snapshots, built-in RAID, and self-healing via checksums for data and metadata.
  • ifupdown2 is the default for new installations using the Proxmox VE official ISO.
  • QEMU 6.0 has support for io_uring as an asynchronous I/O engine for virtual drives; this is now the default for newly started or migrated guests.
  • Countless GUI improvements
  • and much more...
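As a side note, the io_uring default mentioned above can also be pinned explicitly per drive; a hypothetical sketch (the VMID, bus, and storage name here are assumptions, not from the release notes):

```shell
# Assumed VMID 100 on storage "local-lvm"; aio= selects the async I/O engine
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=io_uring
```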
Release notes
https://pve.proxmox.com/wiki/Roadmap

Download
http://download.proxmox.com/iso

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade Proxmox VE 6.4 to 7.0 beta with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I upgrade a 7.0 beta installation to the stable 7.0 release via apt?
A: Yes, upgrading from the beta to the stable release will be possible via apt.

Q: Which apt repository can I use for Proxmox VE 7.0 beta?
A:
Code:
deb http://download.proxmox.com/debian/pve bullseye pvetest

Q: Can I install Proxmox VE 7.0 beta on top of Debian 11 "Bullseye"?
A: Yes.

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.0 beta?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 6.4 to 7.0, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific
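Sketched very roughly, and assuming the repositories are already switched to Bullseye, the two steps might look like this (the exact commands and checker flags are assumptions; the wiki pages above are authoritative):

```shell
# Step 0: run the built-in checker for upgrade blockers
pve6to7 --full
# Step 1: upgrade Proxmox VE 6.4 -> 7.0
apt update && apt full-upgrade
# Step 2: only afterwards, upgrade Ceph Octopus -> Pacific per the wiki
ceph versions   # verify daemon versions before and after each step
```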

Q: When do you expect the stable Proxmox VE 7.0 release?
A: The final Proxmox VE 7.0 will be available as soon as all Proxmox VE 7.0 release critical bugs are fixed.

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

You are welcome to test your hardware and your upgrade path and we are looking forward to your feedback, bug reports, or ideas. Thank you for getting involved!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Awesome! An observation and related question. I see that "EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages." Is "writeback" the recommended caching-mode for Ceph backed disks? I was under the impression "no cache" was the optimal choice for Ceph backed VM disks. Is this only applicable to the EFI partition?
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
Vienna
Awesome! An observation and related question. I see that "EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages." Is "writeback" the recommended caching-mode for Ceph backed disks? I was under the impression "no cache" was the optimal choice for Ceph backed VM disks. Is this only applicable to the EFI partition?
The problem with the EFI disk is its (unusual) write pattern: OVMF makes a ton of small I/Os at boot, which can drastically increase the boot time.
Setting the cache mode to writeback prevents the writes from going to the Ceph cluster one by one. Having some caching on the EFI disk should not be a problem, since it is not accessed by multiple nodes simultaneously and only contains the boot order/resolution anyway...
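For regular VM disks the cache mode is a per-drive option you can set yourself; a hypothetical example of forcing writeback on an existing disk (the VMID and storage name are assumptions):

```shell
# Assumed VMID 100 on a Ceph-backed storage named "ceph-pool"
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
```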
 

Dunuin

Famous Member
Jun 30, 2020
Germany
Great news, thanks.

My PVE 6.4 is installed on top of Debian Buster. PVE 7 needs Debian Bullseye. What would be the right way to upgrade Buster+PVE 6 to Bullseye+PVE 7? Do I just need to change the Debian repositories from buster to bullseye and the PVE repository from 6 to 7, and then run apt update && apt full-upgrade?
 

spirit

Famous Member
Apr 2, 2010
www.odiso.com
Awesome! An observation and related question. I see that "EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages." Is "writeback" the recommended caching-mode for Ceph backed disks? I was under the impression "no cache" was the optimal choice for Ceph backed VM disks. Is this only applicable to the EFI partition?
Before Octopus, writeback slowed down reads because of a big shared lock in the cache.
Since Octopus there is no more lock, reads are fast, and writeback is much faster for writes :)

Code:
Here some iops result with 1vm - 1disk -  4k block   iodepth=64, librbd, no iothread.



                        nautilus-cache=none     nautilus-cache=writeback          octopus-cache=none     octopus-cache=writeback
          
randread 4k                  62.1k                     25.2k                            61.1k                     60.8k
randwrite 4k                 27.7k                     19.5k                            34.5k                     53.0k
seqwrite 4k                  7850                      37.5k                            24.9k                     82.6k
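Numbers like these can be produced with fio's librbd engine; a rough sketch (the pool, image, and client names are assumptions, and rw= would be varied for the other patterns):

```shell
# Assumed pool "rbd" and image "vm-100-disk-0"
fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=vm-100-disk-0 \
    --name=bench --rw=randwrite --bs=4k --iodepth=64 --direct=1 \
    --runtime=60 --time_based
```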
 
The problem with the EFI disk is its (unusual) write pattern: OVMF makes a ton of small I/Os at boot, which can drastically increase the boot time.
Setting the cache mode to writeback prevents the writes from going to the Ceph cluster one by one. Having some caching on the EFI disk should not be a problem, since it is not accessed by multiple nodes simultaneously and only contains the boot order/resolution anyway...
Before Octopus, writeback slowed down reads because of a big shared lock in the cache.
Since Octopus there is no more lock, reads are fast, and writeback is much faster for writes :)


Thank you @spirit and @dcsapak for your detailed and thorough answers!
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
My PVE 6.4 is installed on top of Debian Buster. PVE 7 needs Debian Bullseye. What would be the right way to upgrade Buster+PVE 6 to Bullseye+PVE 7? Do I just need to change the Debian repositories from buster to bullseye and the PVE repository from 6 to 7, and then run apt update && apt full-upgrade?
see:
FAQ
Q: Can I upgrade Proxmox VE 6.4 to 7.0 beta with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
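For a plain Debian-based install, the repository switch the upgrade guide describes boils down to something like this (file paths assumed; the wiki is authoritative, and note Debian renamed the security suite in Bullseye):

```shell
# Point the Debian repositories at Bullseye (buster/updates became bullseye-security)
sed -i 's|buster/updates|bullseye-security|g; s|buster|bullseye|g' /etc/apt/sources.list
# Switch the Proxmox VE repository to the 7.x beta
echo "deb http://download.proxmox.com/debian/pve bullseye pvetest" \
    > /etc/apt/sources.list.d/pve.list
apt update && apt full-upgrade
```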
 

rocketpanda40

New Member
Jun 24, 2021
The legacy ifupdown is still supported in Proxmox VE 7, but may be dropped in a future major release

So with ifupdown2 becoming the new default and support for ifupdown planned to end in the future, will that change be contingent on when ifupdown2 becomes compatible with OVS networks, or is there a potential that use of OVS will be deprecated?
 

randommen

New Member
Jun 24, 2021
Cool! I run PBS locally on my PVE host. Is upgrading PBS to Bullseye also on the roadmap, or wouldn't I run into problems anyway?
If it is not on the roadmap, I might move my PBS into a VM, because I don't want to be without PBS anymore :)
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
Cool! I run PBS locally on my PVE, is upgrading PBS to Bullseye also on the roadmap

Yes, but it's not yet available. If you have PBS installed in parallel, you'll have to wait a bit (a few weeks).
 

spirit

Famous Member
Apr 2, 2010
www.odiso.com
So with ifupdown2 becoming the new default and support for ifupdown planned to end in the future, will that change be contingent on when ifupdown2 becomes compatible with OVS networks, or is there a potential that use of OVS will be deprecated?
ifupdown2 is already compatible with OVS (I wrote the OVS plugin). Do you have any problems with ifupdown2 and OVS?
 

rocketpanda40

New Member
Jun 24, 2021
ifupdown2 is already compatible with OVS (I wrote the OVS plugin). Do you have any problems with ifupdown2 and OVS?
Oh, perfect! I have to admit that when I installed 6.1 (or maybe it was 6.0?) it wasn't compatible, and then I hadn't messed around with the server in several months due to other life issues. I just upgraded to 6.4 the other day and hadn't realized they now play nicely together.

I'll give it a shot this evening and will report if I do experience any issues.

Thanks!
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
So with ifupdown2 becoming the new default and support for ifupdown planned to end in the future, will that change be contingent on when ifupdown2 becomes compatible with OVS networks, or is there a potential that use of OVS will be deprecated?
Those are rather unrelated; you can use OVS with ifupdown2. Also, the ifupdown deprecation may still be a few major releases in the future; it's mostly a subset of ifupdown2 in terms of features and config language anyway.
 

Astraea

Member
Aug 25, 2018
Does this mean that I can finally create a PBS machine to backup my various Proxmox nodes?
 
Dec 10, 2014
Hi to all,

we are planning to create a new Proxmox cluster (with Ceph, possibly 16) within 3 weeks, and I would like to know if you plan to release Proxmox 7 stable by that date. If not, would you advise using the Proxmox 7 beta instead of Proxmox 6.4? In other words, is Proxmox 7 ready for production environments? I have read the known issues of the Proxmox 7 beta, and for me they are irrelevant and completely negligible.

Thank you
 

spirit

Famous Member
Apr 2, 2010
www.odiso.com
Hi to all,

we are planning to create a new Proxmox cluster (with Ceph, possibly 16) within 3 weeks, and I would like to know if you plan to release Proxmox 7 stable by that date. If not, would you advise using the Proxmox 7 beta instead of Proxmox 6.4? In other words, is Proxmox 7 ready for production environments? I have read the known issues of the Proxmox 7 beta, and for me they are irrelevant and completely negligible.

Thank you
I think Proxmox 7 will be released when Debian 11 is released too (so maybe end of summer, but Debian doesn't have a release date currently). Personally, I'd go with 6.4; you still have 18 months of support for Proxmox 6 once 7 is released. No need to rush.
 

morlies

New Member
Dec 30, 2019
You are welcome to test your hardware and your upgrade path and we are looking forward to your feedback, bug reports, or ideas. Thank you for getting involved!
Are questions allowed in this thread? If not, where should I post? I don't know if this is a bug or an error in my configuration.

I made the upgrade to 7.0. VMs start without an issue. Privileged containers don't work:

Unprivileged container = Yes ==> works
Unprivileged container = No ==> doesn't start, error posted below

Any idea?

edit: There was a parameter in /etc/default/grub which limited the cgroups. I changed it back to the default value "", which solved the issue. I don't know where this entry came from, but it's working now: GRUB_CMDLINE_LINUX=""


Task viewer: CT 999 - Start
Code:
cgfsng_setup_limits_legacy: 2764 Bad address - Failed to set "devices.deny" to "a"
cgroup_tree_create: 808 Failed to setup legacy device limits
cgfsng_payload_create: 1171 Numerical result out of range - Failed to create container cgroup
lxc_spawn: 1644 Failed creating cgroups
__lxc_start: 2073 Failed to spawn container "999"
TASK ERROR: startup for container '999' failed


Code:
arch: amd64
cores: 1
hostname: testlxc
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=8A:5F:8A:79:2D:29,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-999-disk-0,size=8G
swap: 512
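The fix described in the edit above can be sketched as follows (run as root, then reboot; the sed pattern assumes a single GRUB_CMDLINE_LINUX line in the file):

```shell
# Reset the extra kernel parameters that limited the cgroup setup
sed -i 's/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=""/' /etc/default/grub
update-grub   # regenerate grub.cfg, then reboot for the change to take effect
```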
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
we are planning to create a new Proxmox cluster (with Ceph, possibly 16) within 3 weeks, and I would like to know if you plan to release Proxmox 7 stable by that date. If not, would you advise using the Proxmox 7 beta instead of Proxmox 6.4? In other words, is Proxmox 7 ready for production environments? I have read the known issues of the Proxmox 7 beta, and for me they are irrelevant and completely negligible.
The beta is, well, a beta. Even though everything is pretty stable, and we do not plan to just break things at will, it can still happen that some issues bite you, in which case we can only point to its beta status.

We have no definite release date for the final Proxmox VE 7.0, it's ready when it's ready. But, if I'd have to guess I'd say it'll be rather weeks than months.

The upgrade path from 6.4 to 7.0 should be pretty much painless, especially for rather fresh setups, so IMO it won't matter too much if you start out with 6.4 now. But I can understand the desire to avoid any major upgrade immediately after the start of a new setup. I'd recommend evaluating things a bit; you could even try out an upgrade from 6.4 to 7.0 already, which would give you an idea of the amount of work involved and improve your experience with any future major Proxmox VE upgrade.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
I think Proxmox 7 will be released when Debian 11 is released too (so maybe end of summer, but Debian doesn't have a release date currently). Personally, I'd go with 6.4; you still have 18 months of support for Proxmox 6 once 7 is released. No need to rush.

Just a slight correction: not 18 months but rather a year, so 12 months after the new Proxmox VE major release.
Also, Debian 11 has its tentative release date set for 31 July. As we ship the most core packages and the installer ourselves, we're not tied hard to releasing after that date. So we're going to continually test Proxmox VE 7 and the upgrade from 6.4 to it, and check the feedback we get from the community to decide when we're ready.
 

wbumiller

Proxmox Staff Member
Staff member
Jun 23, 2015
edit: There was a parameter in /etc/default/grub which limited the cgroups. I changed it back to the default value "", which solved the issue. I don't know where this entry came from, but it's working now: GRUB_CMDLINE_LINUX=""
Could you tell us what parameters you had set there? In theory, some things, such as moving only a subset of cgroups to v2, *could* work with LXC (but I wouldn't recommend it for production use).
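To check which cgroup layout a host actually booted with, one can inspect the cgroup mount; a unified cgroup v2 mount reports cgroup2fs:

```shell
stat -fc %T /sys/fs/cgroup   # "cgroup2fs" on a pure v2 host, "tmpfs" on a hybrid one
cat /proc/cmdline            # shows any systemd.unified_cgroup_hierarchy= override
```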
 
