Proxmox VE 7.0 released!

Questions regarding ZFS and grub:

1. Can you provide a roadmap for grub-related upgrades? There are two issues tying people to grub as far as I know: First, old proxmox installs do not have a separate 512M EFI partition, so it seems a full reinstall may be required for these machines. Secondly, when using older hardware, will grub continue to be supported?

2. The brand-new OpenZFS release 2.1.0 fixes several kernel panics, so I hope you can integrate it into PVE 7 soon.

3. Given the above, would proxmox consider separating out a "bpool" like Ubuntu does - the feature flags from ZFS 2.1.0 allow preventing incompatible upgrades - so boot is much less likely to break in the future?

p.s. The FAQ seems to suggest that "zpool upgrade" will immediately break boot (https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool#Problem_Description) but this is not so - it is only once a feature has been made "active" that boot is broken.
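
For anyone wondering, you can check which pool feature flags are merely "enabled" versus actually "active" with something like this (rpool is just the usual default pool name, adjust as needed):

# "enabled" features are harmless for GRUB; only "active" ones can break its ZFS reader
zpool get all rpool | grep feature@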
 
1. Can you provide a roadmap for grub-related upgrades? There are two issues tying people to grub as far as I know: First, old proxmox installs do not have a separate 512M EFI partition, so it seems a full reinstall may be required for these machines. Secondly, when using older hardware, will grub continue to be supported?
For the first point it depends on the specifics, i.e., if you have any extra disk you can also move the bootloader off to that. You can sometimes also shuffle partitions around to make room for a new one of a few hundred MiB and use that. For the second point: yes, GRUB in general will continue to be supported for the foreseeable future, but I have a feeling that you rather mean support for hardware which only has legacy BIOS. The PVE 7.x release has already shown that we still support it; beyond that I naturally cannot promise anything, but it would definitely come as a surprise to me if we had to stop supporting GRUB on legacy BIOS within just a few years.
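
As a rough sketch of what that could look like once a suitable spare partition exists (the device name is only a placeholder; see the wiki article linked above for the full procedure):

# format the spare partition as vfat and register it as an additional boot partition
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
# check which partitions are configured and how they are booted
proxmox-boot-tool status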

2. The brand-new OpenZFS release 2.1.0 fixes several kernel panics, so I hope you can integrate it into PVE 7 soon.
Yes, it will be packaged and released once it has survived internal testing. ZFS 2.0.5 will be the next release; it's a smaller jump for now and also got most of the important fixes that 2.1.0 got, minus the new features naturally.

3. Given the above, would proxmox consider separating out a "bpool" like Ubuntu does - the feature flags from ZFS 2.1.0 allow preventing incompatible upgrades - so boot is much less likely to break in the future?
That has already happened: since Proxmox VE 6.4 we boot from the vfat-based partition also in the legacy BIOS case, and for UEFI-booted systems this has been the case for over two years, introduced in the PVE 6.0 beta.
Note that while pinning incompatible feature changes helps a lot and is a nice thing to have, GRUB (for example) still uses its own ZFS code and tended to cause problems in the past even without any newer features. Using the very simple and widely supported vfat avoids those basic boot issues completely.
For booting a ZFS pool with an older kernel the compat flags will definitely help.
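
Once ZFS 2.1 lands, a rough sketch of how the compatibility property could help there too (the pool name is only an example):

# restrict the pool to the feature set GRUB's own ZFS code understands
zpool set compatibility=grub2 rpool
# later "zpool upgrade" runs will then only enable features from that set
zpool upgrade rpool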
 
Thank you for VE 7.0 !!
I upgraded my homelab from 6.4 to 7.0 without any problems.
Only thing I notice now with version 7.0 is the dramatic increase in memory consumption with LXC containers.
In 6.4 my Ubuntu and Debian containers used ~45-50MB (256MB given).
But now in 7.0 the same containers are using ~155-160MB (512MB given).
How come?
 
Only thing I notice now with version 7.0 is the dramatic increase in memory consumption with LXC containers.
In 6.4 my Ubuntu and Debian containers used ~45-50MB (256MB given).
But now in 7.0 the same containers are using ~155-160MB (512MB given).
How come?
How do you measure that? Is it RSS (resident set size) or virtual memory? And which version of Ubuntu/Debian is used there (so that we can see if we can reproduce anything)?

Note that due to the switch from control groups v1 to a pure v2 setup, the accounting may be slightly different, especially if the tool cannot cope well with cgroup v2 systems. But it should not allow any more usage; we can actually limit swap and memory much more cleanly in v2 (it was quite odd in cgroup v1).
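
If you want to compare the raw numbers yourself, a rough way to do that on the cgroup v2 host (container ID 101 is just an example, and the exact cgroup path can differ):

# memory currently charged to the container's cgroup (roughly what the web interface reports)
cat /sys/fs/cgroup/lxc/101/memory.current
# the configured memory limit
cat /sys/fs/cgroup/lxc/101/memory.max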
 
How do you measure that? Is it RSS (resident set size) or virtual memory? And which version of Ubuntu/Debian is used there (so that we can see if we can reproduce anything)?

Note that due to the switch from control groups v1 to a pure v2 setup, the accounting may be slightly different, especially if the tool cannot cope well with cgroup v2 systems. But it should not allow any more usage; we can actually limit swap and memory much more cleanly in v2 (it was quite odd in cgroup v1).

Memory usage is shown in the Proxmox web interface > host > container > Summary
(also shows: Status, CPU usage, Bootdisk size etc).

LXC templates used:
ubuntu-20.04-standard_20.04-1_amd64.tar.gz
debian-10-standard_10.7-1_amd64.tar.gz

not sure what you mean by RSS and virtual memory, sorry...
 
Memory usage is shown in the Proxmox web interface > host > container > Summary
(also shows: Status, CPU usage, Bootdisk size etc).

LXC templates used:
ubuntu-20.04-standard_20.04-1_amd64.tar.gz
debian-10-standard_10.7-1_amd64.tar.gz

not sure what you mean by RSS and virtual memory, sorry...
Ok, and thanks for the info. Also, no worries regarding those terms; memory management and accounting in Linux has twenty metrics floating around, all with a slightly different meaning, so it's always a bit hard to know them all and to choose fitting ones.
 
Ok, and thanks for the info. Also, no worries regarding those terms; memory management and accounting in Linux has twenty metrics floating around, all with a slightly different meaning, so it's always a bit hard to know them all and to choose fitting ones.

Update:
just installed a fresh CT template (ubuntu-20.04-standard_20.04-1_amd64.tar.gz), assigned 256MB RAM,
started it up, let it run with nothing to do, and it already uses 199MB...
 
I have a number of containers that I created on PVE 6.4 using turnkey-core 16.0 with docker (nesting enabled, privileged). However, after upgrading to PVE 7, docker won't start on these containers.

Looking at the docker logs inside the container, it looks like "overlayfs" was missing, so I added that to /etc/modules-load.d/modules.conf and rebooted the host - it still doesn't work.
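
For reference, roughly what I did (the kernel module itself is named "overlay"):

# load overlayfs now and on every boot
echo overlay >> /etc/modules-load.d/modules.conf
modprobe overlay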

I paged through the release notes - didn't see anything obvious.
 
just installed a fresh CT template (ubuntu-20.04-standard_20.04-1_amd64.tar.gz), assigned 256MB RAM,
started it up, let it run with nothing to do, and it already uses 199MB...
Can you check inside the container: use the free -h CLI tool and please post the output here.
 
Looking at the docker logs inside the container, it looks like "overlayfs" was missing, so I added that to /etc/modules-load.d/modules.conf and rebooted the host - it still doesn't work.
Overlayfs is still available; it's even used in the ISO installer (which uses the same kernel) for managing the installation environment. Aufs is no longer available in the 5.11 kernel, but Docker switched away from that to overlayfs years ago. The only situation I know of where aufs could still be in use is when running Docker in CTs on ZFS.
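
If you want to double check which storage driver Docker actually picked inside the CT, something along these lines should work once the daemon runs:

# inside the container
docker info | grep -i "storage driver"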
 
               total        used        free      shared  buff/cache   available
Mem:           256Mi        37Mi       158Mi       0.0Ki        59Mi       218Mi
The last column, "available", shows that these 218 MiB are still effectively available to the container; most of that is currently used by caching and some other kernel things that can be freed up if needed (as the mantra goes, unused RAM is wasted RAM).
Maybe we can adapt the gauges/metrics in the web interface to reflect that a bit better.
 
Overlayfs is still available, it's even used in the ISO installer (which uses the same kernel) for managing the installation environments. The aufs is not available any more in the 5.11 kernel, but docker switched away from that years ago to overlayfs, the only situation I know of where that one could be used, is when using docker in CTs on ZFS.
Adding overlay fixed the first issue; now it looks like it's a cgroups issue:

WARN[2021-07-06T13:53:23.416710405-05:00] Unable to find cpu cgroup in mounts
WARN[2021-07-06T13:53:23.416730635-05:00] Unable to find blkio cgroup in mounts
WARN[2021-07-06T13:53:23.416746328-05:00] Unable to find cpuset cgroup in mounts
WARN[2021-07-06T13:53:23.416818381-05:00] mountpoint for pids not found
INFO[2021-07-06T13:53:23.417613702-05:00] stopping healthcheck following graceful shutdown module=libcontainerd
INFO[2021-07-06T13:53:23.417638054-05:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2021-07-06T13:53:23.418216501-05:00] pickfirstBalancer: HandleSubConnStateChange: 0xc0007b7f30, TRANSIENT_FAILURE module=grpc
INFO[2021-07-06T13:53:23.418244858-05:00] pickfirstBalancer: HandleSubConnStateChange: 0xc0007b7f30, CONNECTING module=grpc
Error starting daemon: Devices cgroup isn't mounted
 
The last column, "available", shows that these 218 MiB are still effectively available to the container; most of that is currently used by caching and some other kernel things that can be freed up if needed (as the mantra goes, unused RAM is wasted RAM).
Maybe we can adapt the gauges/metrics in the web interface to reflect that a bit better.
Has anything changed in this regard from 6.4 to 7.0? Because I'm also seeing almost double the memory usage in smaller containers in 7.0. But looking at free -m in the containers shows the same usage as always. So it seems that only the GUI/Proxmox shows a greater memory usage...
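
If you want to see the exact number the web interface works with, the status API returns it; a hedged example (the node name and container ID are placeholders):

# "mem" is what the GUI summary is based on (in bytes)
pvesh get /nodes/<nodename>/lxc/101/status/current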
 
WARN[2021-07-06T13:53:23.416710405-05:00] Unable to find cpu cgroup in mounts
WARN[2021-07-06T13:53:23.416730635-05:00] Unable to find blkio cgroup in mounts
WARN[2021-07-06T13:53:23.416746328-05:00] Unable to find cpuset cgroup in mounts
WARN[2021-07-06T13:53:23.416818381-05:00] mountpoint for pids not found
INFO[2021-07-06T13:53:23.417613702-05:00] stopping healthcheck following graceful shutdown module=libcontainerd
INFO[2021-07-06T13:53:23.417638054-05:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2021-07-06T13:53:23.418216501-05:00] pickfirstBalancer: HandleSubConnStateChange: 0xc0007b7f30, TRANSIENT_FAILURE module=grpc
INFO[2021-07-06T13:53:23.418244858-05:00] pickfirstBalancer: HandleSubConnStateChange: 0xc0007b7f30, CONNECTING module=grpc
Error starting daemon: Devices cgroup isn't mounted
In general, please open a new thread for specific issues.

First, you can check the following thread though; it may help in your case if you have configured device passthroughs/allows:
https://forum.proxmox.com/threads/p...trough-not-working-anymore.92025/#post-400916
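
If the CT config contains old cgroup v1 style device allows, they typically need the cgroup2 variant of the key on PVE 7; a hedged example (the device major/minor numbers are placeholders):

# /etc/pve/lxc/<vmid>.conf
# old (cgroup v1):  lxc.cgroup.devices.allow: c 226:* rwm
# new (cgroup v2):
lxc.cgroup2.devices.allow: c 226:* rwm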
 
The last column, "available", shows that these 218 MiB are still effectively available to the container; most of that is currently used by caching and some other kernel things that can be freed up if needed (as the mantra goes, unused RAM is wasted RAM).
Maybe we can adapt the gauges/metrics in the web interface to reflect that a bit better.

If I'm correct, version 7.0 uses a bit less memory for this CT?
I now see 36MB used with free -h.
In version 6.4 it was around 45-50MB used.
 
=> Read the announcement (first post).
Yep. It says Debian was delayed mainly because of the installer. Mainly. That implies there are other problems holding up the release, not just the installer.
 
