Proxmox VE 7.0 (beta) released!

Do you recommend continuing to use ZFS as the root file system, or switching to the new BTRFS, in both cases as RAID 1?

BTRFS is only introduced as a technology preview in Proxmox VE 7.0, so it is in general not yet recommended for production setups.

That does not mean it would necessarily break, but it is new (in PVE) and thus less tested than existing solutions - if one has a lot of (good) experience with BTRFS, they may decide to use it on some more important systems already.

That said, for the foreseeable future, even once the BTRFS integration is no longer a technology preview, we'd still recommend ZFS to anyone unsure what to choose for a local storage with snapshot and safe RAID capabilities.
 
So, will there be a switch or something in the GUI?
No, this is not an option that can just be switched at runtime; an admin needs to decide what the best course of action is for their setup.

We currently only output a more telling warning when starting problematic containers (i.e., those with a systemd older than version 231) and document how to do the switch as well as possible alternatives.
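
If you are unsure which systemd version a given container runs, a quick check from the host could look like this (the VMID is just an example):

    pct exec 101 -- systemctl --version
    # the first line of the output, e.g. "systemd 219", tells you whether it is older than 231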

The general possibilities are:
  • Switch to the legacy cgroup hierarchy, which will continue to work for the upcoming Proxmox VE 7.x releases, but will probably be dropped in the major release after that. This needs to be set in the host's bootloader config (see docs and the example after this list) and requires a PVE host reboot afterwards.

  • Upgrade the systemd inside the container to a newer version (v231+) if possible; some distributions provide a backports-style repository to opt in to newer software.

  • Upgrade the whole container to a newer release of the distribution. CentOS 7, for example, was released 7 years ago, and with containers the inner and outer stack (kernel/PID 1/...) need a certain level of compatibility, as they have to work together - a 7+ year-old technology stack is simply not always the best fit for containers, which brings me to my next point...

  • ... move to VMs. Virtual machines have much less interaction with the host, i.e., I can still set up very old OS versions like Windows XP or Debian Lenny there without any issue. Long term, that would be the most stable solution if you plan to stay on distribution releases like CentOS 7 or Ubuntu 16.04 for the foreseeable future.

  • Just for the record: migrating to another distro with more frequent releases but a stable upgrade path, like Debian, can also be an option for some, and if so, I'd recommend that too. Its releases are roughly two years apart, which means one has a recent enough software stack to avoid problems, yet it is not bleeding edge either - with at least a few months of freeze time, it is not "too new" and does not run into too many fresh issues. The upgrade path is pretty good and allows one to do major upgrades with the only impact being a standard reboot of the system. FWIW, those are also some of the reasons the Proxmox projects are based on it.
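
For reference, switching a PVE 7 host back to the legacy cgroup hierarchy is done on the kernel command line; the exact file depends on the bootloader in use, so treat the following only as a sketch and double-check the reference documentation for your setup:

    # GRUB: append the parameter to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
    #   systemd.unified_cgroup_hierarchy=0
    update-grub

    # systemd-boot (e.g. ZFS root): append the same parameter to /etc/kernel/cmdline
    proxmox-boot-tool refresh

    # a host reboot is required afterwards
    reboot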
 
Yes, I know CentOS 7 is 7 years old, but there is no CentOS 9 at the moment and CentOS 8 is EOL, so this is THE CentOS distro; it is not old or new, it is the only one, so there is no upgrade path.
I guess the best advice is to stop using LXC, because the future is uncertain.
 
Yes, I know CentOS 7 is 7 years old, but there is no CentOS 9 at the moment and CentOS 8 is EOL, so this is THE CentOS distro; it is not old or new, it is the only one, so there is no upgrade path.
CentOS 8 is not EOL until 2021-12-31, and then there's CentOS Stream, which upstream communicated as the replacement. Besides that, there are already alternatives to CentOS like AlmaLinux or Rocky Linux, both with an active community, FWICT, and I saw at least AlmaLinux mentioned as already being used by some Proxmox VE users, upgrading directly from CentOS 8.

And that's just one of the options I gave :-) You can still use the legacy cgroup controller; that's a one-time change.
I guess the best advice is to stop using LXC, because the future is uncertain.
No, it is not; you got your conclusion wrong. The future of LXC is just fine, and distributions with a sane and realistic lifetime per release of a few years (not a decade) are working out just fine. Very old distribution releases, where the future of their newer releases got a damper, naturally have a harder time already now - but that really cannot be blamed on LXC (which itself has nothing to do with this "issue" anyway).
 
CentOS 8 is not EOL until 2021-12-31, and then there's CentOS Stream, which upstream communicated as the replacement. Besides that, there are already alternatives to CentOS like AlmaLinux or Rocky Linux, both with an active community, FWICT, and I saw at least AlmaLinux mentioned as already being used by some Proxmox VE users, upgrading directly from CentOS 8.

And that's just one of the options I gave :) You can still use the legacy cgroup controller; that's a one-time change.

No, it is not; you got your conclusion wrong. The future of LXC is just fine, and distributions with a sane and realistic lifetime per release of a few years (not a decade) are working out just fine. Very old distribution releases, where the future of their newer releases got a damper, naturally have a harder time already now - but that really cannot be blamed on LXC (which itself has nothing to do with this "issue" anyway).
Alma and Rocky don't have a point release yet, so it is the same as using a Proxmox beta, at least for now.
So my conclusion is okay: if you intend to have a long-term machine, you should use a VM, and use LXC for some short-term projects, because it could break sometime in the future.
Also, there are LTS distros, with SUSE giving even 13 years, if I'm not mistaken.
 
So my conclusion is okay: if you intend to have a long-term machine, you should use a VM, and use LXC for some short-term projects, because it could break sometime in the future.
Ah OK, I must admit I interpreted your reply a bit differently then, as in "the future of LXC is uncertain", not as in "the future is uncertain in general" - my bad.

I think it needs to be stated here that the cgroup v1 -> v2 switch is a really big "boundary": there were serious drawbacks in the v1 design whose fixing would have meant breaking user space, thus v2 was born, and the kernel devs maintaining it really do not want to go through this another time and take more care/planning to avoid it, so it's not like such intrusive changes happen often.
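
As a side note, if you want to check which hierarchy a given host currently runs, a quick way (assuming the standard mount point) is:

    stat -fc %T /sys/fs/cgroup/
    # prints "cgroup2fs" on a pure cgroup v2 host and "tmpfs" on a legacy/hybrid one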

Also, there are LTS distros, with SUSE giving even 13 years, if I'm not mistaken.
While I can understand the desire to set up a system and not really touch it for that long, IMO it does not make much sense for anything but civil infrastructure projects (which have lifetimes of 30 - 100 years): most hardware will fail earlier, and if you run into issues ten years in, it may well be that the person responsible for the original setup has retired. And as those companies can foresee the future about as well as you and me, I'd not trust that promise too much (cue back to CentOS 8 again).

But anyway, you'll still get a year of updates for Proxmox VE 6.4, and you can switch to legacy cgroups in PVE 7, so you could still get away with using CentOS 7 in LXC basically until its EOL in 2024, FWIW.
 
Is there any video or tutorial to guide users on how to better install or test certain areas in PVE 7? Sometimes users have no clue how to test it.
 
Funny thing with Proxmox 7: in a (fresh) unprivileged container (Proxmox Debian 10 template) with /dev/console set to Disabled, even root cannot echo Proxmox >/dev/null without getting a permission denied error. This works fine in Proxmox 6.4 (latest) and when /dev/console is set to Enabled. Should I have expected this or is this a bug?
 
Another thing in Proxmox 7: removing the cpulimit of an (unprivileged) container while it is running fails with "Parameter verification failed. (400) cpulimit: unable to hotplug cpulimit: closing file '/sys/fs/cgroup/lxc/110/cpu.max' failed - Invalid argument". This works fine with Proxmox 6.4; is it a bug?
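
For reference, the same removal can also be triggered on the CLI; assuming the standard pct tooling and using the VMID from the error message above, roughly:

    pct set 110 --delete cpulimit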
 
The "EFI disks stored on Ceph now use the writeback caching-mode" is supposed to be in the actual beta, isn't it?
I have a host upgraded from 6.4 to the 7 beta and newly created a VM. However, in the <VMID>.conf file the EFI entry has no "writeback" in it, and the VM takes a long time to boot. When I modify the entry with cache=writeback it boots instantly; however, when doing another hardware change in the web UI, the EFI entry gets deleted completely.
Therefore the question: should the entry have cache=writeback in it for the EFI disk, or is writeback handled elsewhere for EFI disks? Or does it have to be enabled somewhere?
Not using KRBD if that matters.
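
For context, the manually edited line in /etc/pve/qemu-server/<VMID>.conf looked roughly like this (storage and disk names are placeholders, not my actual values):

    efidisk0: ceph-vm:vm-100-disk-1,cache=writeback,size=128K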
 
You can already select only some disks for a separate root pool during installation and create additional ones afterwards through the CLI or web interface.
That's not the same thing though. It's impossible to run encryption on the root pool with the current setup.
 
Funny thing with Proxmox 7: in a (fresh) unprivileged container (Proxmox Debian 10 template) with /dev/console set to Disabled, even root cannot echo Proxmox >/dev/null without getting a permission denied error. This works fine in Proxmox 6.4 (latest) and when /dev/console is set to Enabled. Should I have expected this or is this a bug?
This seems to be caused by differences between how the devices controller behaves in cgroup v1 and what LXC emulates for cgroup v2. We'll probably fix this by rolling out a default config for cgroupv2-devices to restore the previous behavior.
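
Until that lands, a rough, untested sketch of an interim workaround could be to explicitly allow the affected device node via a raw LXC key in the container config (the VMID is an example; /dev/null is char device 1:3):

    # appended to /etc/pve/lxc/101.conf
    lxc.cgroup2.devices.allow: c 1:3 rwm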
 
That's not the same thing though. It's impossible to run encryption on the root pool with the current setup.
As long as the system gets booted from the 512 MB ESP (either using systemd-boot or GRUB), there should be no problem in having encryption enabled on the rpool - I have not explicitly set this up on PVE, but AFAICT the initramfs should support reading the passphrase interactively during boot (else you can always keep only the actual / dataset unencrypted and create a dataset for all your data which has encryption enabled).

I hope this helps!
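
If you go the dataset route, a minimal sketch of creating such an encrypted dataset could look like this (the dataset name is just an example):

    zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt rpool/data-enc
    # after a reboot, load the key and mount it again with:
    zfs load-key rpool/data-enc
    zfs mount rpool/data-enc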
 
As long as the system gets booted from the 512 MB ESP (either using systemd-boot or GRUB), there should be no problem in having encryption enabled on the rpool - I have not explicitly set this up on PVE, but AFAICT the initramfs should support reading the passphrase interactively during boot (else you can always keep only the actual / dataset unencrypted and create a dataset for all your data which has encryption enabled).

I hope this helps!
Ah, this was changed in 6.4 - my bad. I had a 6.3 system where I created an additional encrypted dataset on the rpool and made it unbootable (GRUB). I'm not sure which method is preferable, ESP or bpool. As long as it works...
 
Yes, it's a typo - 2.15.0 is the version. I still haven't found a solution...
There were some errors in the wiki examples (corrected now): you have to set "autostart" for the bond slaves, the bond, the bridge, and the VLAN (i.e., the keyword "auto" in /etc/network/interfaces) - when configuring via the GUI this is done automatically.
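
A minimal sketch of what that looks like in /etc/network/interfaces (interface names, bond mode, and addresses are placeholders for illustration):

    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    auto vmbr0.100
    iface vmbr0.100 inet static
        address 192.0.2.10/24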
 
