Proxmox VE 7.0 released!

I updated my Proxmox to version 7. The OpenMediaVault VM has an HDD directly attached via SCSI (passthrough). But the VM failed and crashed my whole Proxmox machine.
It seems to be an error during HDD reads/writes. I did not have this problem on Proxmox 6.
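For reference, the disk was attached roughly like this (the VMID and device path below are placeholders, not my actual values):
Code:
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL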
Is there a bug with directly attaching a drive to a VM?
Nothing that we know of; can you please open a new thread with the VM config (qm config VMID) posted?
Also, do you see the IO errors at the Proxmox VE level or in the guest?
 
Noticed a few UI updates today, but it looks like this one might have been missed for those not making use of swap (this is after clearing the cache and a hard reload on Chrome 91.0.4472.124); note this will be hard to see:
0_Percent_Swap.png
This will probably illustrate the issue better:
0_Percent_Swap-Inspect.png
Essentially, the percentage 0% is printed as plain text in this field rather than the progress bar being drawn -- more context can be gained by comparing it to a working progress bar:
0_Percent_Swap-Inspect-deets.png
Hopefully I'm explaining the issue properly; feel free to request any additional details to replicate.
Code:
# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-9 (running version: 7.0-9/228c9caa)
pve-kernel-helper: 7.0-4
pve-kernel-5.11: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
pve-kernel-5.11.21-1-pve: 5.11.21-1
ceph: 16.2.4-pve1
ceph-fuse: 16.2.4-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve6
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-10
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

Edit: I'd like to add that the issue seems to occur for everything that reads 0%, not just the field above; I noticed it on a VM details page as it was being reloaded.
 
Hi all, I upgraded my 6.4 install, which has numerous LXC containers with tun/tap enabled via lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

I can confirm there are no issues. I don't have any cgroup declarations or extra lines in the container conf files, and everything works fine.
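For reference, that is the only tun-related addition in each container's config; as an excerpt it looks like this (the VMID in the path is just an example):
Code:
# excerpt from /etc/pve/lxc/101.conf (VMID is an example)
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file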
 
Hi,
I did an upgrade of my test system to version 7. The upgrade itself was uneventful, but after rebooting, the first container didn't start.
Starting it in debug mode shows:
Code:
pct start 100 --debug 1

run_buffer: 316 Script exited with status 1
lxc_init: 816 Failed to run lxc.hook.pre-start for container "100"
__lxc_start: 2007 Failed to initialize container "100"
: type g nsid 0 hostid 100000 range 65536
INFO     lsm - lsm/lsm.c:lsm_init_static:40 - Initialized LSM security driver AppArmor
INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100", config section "lxc"
DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: Can't call method "unified_cgroupv2_support" on an undefined value at /usr/share/perl5/PVE/LXC/Setup.pm line 428.

DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: error in setup task PVE::LXC::Setup::unified_cgroupv2_support

ERROR    conf - conf.c:run_buffer:316 - Script exited with status 1
ERROR    start - start.c:lxc_init:816 - Failed to run lxc.hook.pre-start for container "100"
ERROR    start - start.c:__lxc_start:2007 - Failed to initialize container "100"
INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "100", config section "lxc"
startup for container '100' failed

I traced this back to the container having the setting:
ostype: unmanaged
That value isn't in the list of supported OS types, so the script fails.
I changed the ostype to debian and the container starts fine.
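For anyone else who hits this before it is fixed, the change can be made either by editing /etc/pve/lxc/100.conf directly or with something like:
Code:
pct set 100 --ostype debian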

Is this known behaviour or did I run into a bug?

Regards,

Mark
 
With this new release, can you confirm that the AMD Ryzen Gen 3 architecture will work (without the need to compile any kernels or anything)?
I am going to get a new AMD Ryzen 7 5800X.
 
I traced this back to the container having the setting:
ostype: unmanaged
That value isn't in the list of supported OS types, so the script fails.
I changed the ostype to debian and the container starts fine.
Managed to reproduce the issue - could you please open a bug report over at https://bugzilla.proxmox.com providing the details from your post?

Thanks!
 
As you need the CLI for upgrading to a new major release anyway, and the MAC address only needs to change for the interface your host communicates over, and that only if you're in a restricted network (e.g., a rented server at a hosting provider).
Because of the required hwaddress for bridges (in restricted networks) it is currently not possible to use the Proxmox installer, since it creates a bridge by default and does not set the hwaddress of the NIC selected during installation.
I think the installer should contain an option that sets the hwaddress of the selected NIC on the default bridge it creates, to allow using the Proxmox installer for remote servers in restricted hosting-provider networks. Otherwise it can no longer be used to set up systems this way.

Is that something that could be added in the next version of the PVE7 ISO?

There is also the issue that, in some environments, vKVM access is only possible by virtually booting the system disks into KVM and then running the installer there. I personally dislike doing it this way since it makes things complicated, but it is being used this way.

I'll take care of opening a feature request once I find time to do that.

Thanks!
 
Is the chrony package installed?
Check the release notes:

https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.0

I hope this helps!
I've changed this, and during the installation of chrony, timesyncd was removed. But now I see a dead service "systemd-timesyncd.service" in the Proxmox web interface. The service is removed and the unit file does not exist on the system anymore. How can I remove the failed entry from the web interface?
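For reference, the switch itself was just the following, as suggested in the release notes (chrony conflicts with systemd-timesyncd, so the latter gets removed automatically):
Code:
apt install chrony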

Many thanks.
 
Has anything changed in this regard from 6.4 to 7.0? Because I'm also seeing almost double the memory usage in smaller containers on 7.0. But looking at free -m inside the containers shows the same usage as always. So it seems that only the GUI/Proxmox shows a greater memory usage...
Seeing the same issue here. It looks to be caused by the memory usage being calculated from total used memory, including caches, which is inconsistent with how PVE shows host memory usage (where it ignores caches/buffers) and with how it was before.

Code:
# free -m
               total        used        free      shared  buff/cache   available
 Mem:           6144        2760         522           9        2860        3383
While the UI shows ~91% memory usage.
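For what it's worth, the numbers above line up with the UI if the percentage is computed as (total - free) / total = (6144 - 522) / 6144 ≈ 91%, whereas used / total = 2760 / 6144 is only about 45%.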
chrome_2021-07-10_21-04-29.png

In addition, it seems that PVE is no longer tracking any IO writes for LXC containers; the value just stays at 0. Reads look fine, as do writes on VMs.
chrome_2021-07-10_21-06-49.png
 
Seeing the same issue here. It looks to be caused by the memory usage being calculated from total used memory, including caches, which is inconsistent with how PVE shows host memory usage (where it ignores caches/buffers) and with how it was before.

Code:
# free -m
               total        used        free      shared  buff/cache   available
 Mem:           6144        2760         522           9        2860        3383
While the UI shows ~91% memory usage.
View attachment 27539
I can't confirm that here. I have several Windows systems running, and memory behaves normally.
 

Attachments

  • Bildschirmfoto vom 2021-07-11 10-03-21.png
I've changed this, and during the installation of chrony, timesyncd was removed. But now I see a dead service "systemd-timesyncd.service" in the Proxmox web interface. The service is removed and the unit file does not exist on the system anymore. How can I remove the failed entry from the web interface?
Not really, but as mentioned in another post, we'll improve the visuals for the time-sync-providing daemons.
 
I think the installer should contain an option that sets the hwaddress of the selected NIC on the default bridge it creates, to allow using the Proxmox installer for remote servers in restricted hosting-provider networks. Otherwise it can no longer be used to set up systems this way.
Sure it can: you just set it up as normal and then change that afterwards. I mean, one is already using iKVM/IPMI when installing PVE on a dedicated server in a hosting-provider setup, as otherwise you couldn't use the Proxmox VE installer anyway.

That said, it does not need to be an option, which would just make things complicated - it can be done unconditionally.
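To make that concrete, the workaround after installation is simply to pin the MAC the provider expects on the bridge in /etc/network/interfaces; a sketch (the addresses, NIC name and MAC are placeholders):
Code:
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # MAC address the hosting provider expects on this port
        hwaddress aa:bb:cc:dd:ee:ff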
 
Anyone else seeing a huge increase in CPU load post-upgrade to v7?
I can confirm these observations; however, they do not seem limited to VMs. This is a screenshot from a mostly idle node that runs exclusively CTs:
chrome_2021-07-11_13-26-49.png

Here are screenshots from two VMs on other node:
chrome_2021-07-11_13-22-49.png
chrome_2021-07-11_13-23-02.png

And overall CPU usage on that node:
chrome_2021-07-11_13-23-54.png

This is nothing to be blamed on PVE, however, I'm afraid; I can observe different power management from the kernel, resulting in overall lower power consumption too. Specifically, the power usage is less uniform and more aggressively on-demand (a lower power state is reached far more often, with more aggressive jumps to higher states), which results in lower mean power usage over a period of time - the specific change in power usage will depend on your previous workload.
 
This is nothing to be blamed on PVE, however, I'm afraid; I can observe different power management from the kernel, resulting in overall lower power consumption too.
That comes from the change of the default CPU frequency scaling governor from performance (always the highest base clock possible) to schedutil (depends on load, but is pretty good at providing good performance while still being able to save power if (a few) cores are idling).

We may move the default back though, as it seems that some specific VM loads still cannot really cope with those frequency changes, and for hypervisors a case can really be made for defaulting to performance (even if some may still prefer schedutil, e.g., in a homelab or another energy-conscious environment).
This change actually came with the 5.11 kernel, so also in PVE 6, but there most of the initial feedback seemed fine.
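You can check which governor is currently active on a node (listed once per core) with something like:
Code:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor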
 
Sure it can: you just set it up as normal and then change that afterwards. I mean, one is already using iKVM/IPMI when installing PVE on a dedicated server in a hosting-provider setup, as otherwise you couldn't use the Proxmox VE installer anyway.

That said, it does not need to be an option, which would just make things complicated - it can be done unconditionally.
It was just a suggestion. I don't need this option, but others might, and they may write off the PVE installer as unusable because the system isn't available after boot. In my opinion an installer should cover settings like these, but after all it's your software and your job to make customers happy. :)
 
That comes from the change of the default CPU frequency scaling governor from performance (always the highest base clock possible) to schedutil (depends on load, but is pretty good at providing good performance while still being able to save power if (a few) cores are idling).

We may move the default back though, as it seems that some specific VM loads still cannot really cope with those frequency changes, and for hypervisors a case can really be made for defaulting to performance (even if some may still prefer schedutil, e.g., in a homelab or another energy-conscious environment).
This change actually came with the 5.11 kernel, so also in PVE 6, but there most of the initial feedback seemed fine.
Maybe this could be a configurable option that people can choose to set themselves. In hyperconverged setups it's advisable to leave the CPU governor set to performance for the sake of latency. That is at least what we have experienced over the last few years.
 
That comes from the change of the default CPU frequency scaling governor from performance (always the highest base clock possible) to schedutil (depends on load, but is pretty good at providing good performance while still being able to save power if (a few) cores are idling).

We may move the default back though, as it seems that some specific VM loads still cannot really cope with those frequency changes, and for hypervisors a case can really be made for defaulting to performance (even if some may still prefer schedutil, e.g., in a homelab or another energy-conscious environment).
This change actually came with the 5.11 kernel, so also in PVE 6, but there most of the initial feedback seemed fine.
Thanks for the update; if it is CPU frequency scaling then that's fine - I'm happy for it to be power-saving.

Having said that, is there a straightforward way to change the governor if we run into issues?

Thanks!
 
Having said that, is there a straightforward way to change the governor if we run into issues?
Same as on any other Linux machine.

Code:
# apply the performance governor to all cores (takes effect immediately, not persistent across reboots)
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
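If you want that to persist across reboots, one way is a small systemd oneshot unit; a minimal sketch (the unit name and file path are just an example):
Code:
# /etc/systemd/system/cpufreq-performance.service (example name)
[Unit]
Description=Set the performance cpufreq governor on all CPU cores

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now cpufreq-performance.service.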

---
I can't confirm that here. I have several Windows systems running, and memory behaves normally.
Appreciate the effort; however, the issue in question concerns CTs, not VMs.

---
In addition, it seems that PVE is no longer tracking any IO writes for LXC containers; the value just stays at 0. Reads look fine, as do writes on VMs.
Can we get any update on this? While the memory issue isn't a big deal to wait on, this one is rather annoying, as it removes the ability to properly monitor what happens inside CTs.
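In the meantime I'm reading the cgroup accounting on the node directly as a stopgap; a sketch, assuming the default cgroup v2 layout on PVE 7, a container with VMID 100, and the io controller enabled:
Code:
cat /sys/fs/cgroup/lxc/100/io.stat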
 
