Proxmox VE 7.0 released!

Thanks for the update. If it is CPU scheduling then that's fine - happy for it to be power saving.

Having said that, is there a straightforward way to change the scheduler if we get issues?

Thanks!
It looks like this can be changed with the cpupower tool (the -g flag sets the CPU frequency governor):
Code:
cpupower frequency-set -g GOVERNOR
# Examples
cpupower frequency-set -g performance
cpupower frequency-set -g schedutil
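To check which governor is currently active (and which ones the cpufreq driver offers) before or after switching, a couple of read-only commands are enough:
Code:
cpupower frequency-info -p      # currently used cpufreq policy/governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors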
 
Sure it can, you just set it up as normal and then change that afterwards. I mean, one is already using iKVM/IPMI when installing PVE on a dedicated server in a hosting-provider setup, as otherwise you couldn't use the Proxmox VE installer anyway.
AFAIK, at Hetzner the iKVM/IPMI console is not a free and quickly available service, so I install all my nodes with Hetzner's Debian distro and then apt-get PVE from the repositories (as Hetzner's HOWTO recommends). For now I can't install PVE on a Hetzner dedicated server because Debian 11 is not yet released and available for installimage there, and I also can't upgrade PVE 6 to 7 because I have all my VM/CT backups in a PBS co-installed with PVE, which is not supported by PVE 7 yet.

So, I'll wait until the Debian 11 image gets to Hetzner and/or the PVE installation procedures become more flexible and mature, while playing with PVE 7 on small local testing hosts. Or is there any other good way to install PVE on ZFS on a Hetzner dedicated server?
 
The update from 6.4 to 7.x went fine; when asked by the installer I kept the maintainer's version of the files (this occurred about 4 times during the upgrade).
Also, all VMs are running fine (Linux, Windows, containers).

The only problem I can find is (marked in red):
[screenshot: two services shown in red in the node's service list]
Should I do anything regarding these 2 services or is this normal behavior?

Also a question... I got a warning in the Repositories section...
[screenshot: repository warning]
Should I delete the disabled "pve-no-subscription" line or enable it? And regarding "buster", which is used in a VM (I know that it is old, but it is working), should I also try to update that VM and delete this line?

Thank you for your answers.
 
AFAIK, at Hetzner the iKVM/IPMI console is not a free and quickly available service, so I install all my nodes with Hetzner's Debian distro and then apt-get PVE from the repositories (as Hetzner's HOWTO recommends). For now I can't install PVE on a Hetzner dedicated server because Debian 11 is not yet released and available for installimage there, and I also can't upgrade PVE 6 to 7 because I have all my VM/CT backups in a PBS co-installed with PVE, which is not supported by PVE 7 yet.

So, I'll wait until the Debian 11 image gets to Hetzner and/or the PVE installation procedures become more flexible and mature, while playing with PVE 7 on small local testing hosts. Or is there any other good way to install PVE on ZFS on a Hetzner dedicated server?
Installing worked fine for me with:
- installimage: Debian 10.9 with mdadm RAID, install PVE 6, upgrade to PVE 7, set hwaddress for the main bridge, reboot, done
- rescue system: install QEMU, write down the name and MAC of the main NIC, download the PVE 7 ISO, start a QEMU VM with the PVE installer, install with ZFS or btrfs RAID, reboot the VM after setup (not the server), log in and correct the NIC name and set the hwaddress on vmbr0, shut down the VM, reboot the server, done (see the sketch below)
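For the rescue-system variant, the core trick is booting the PVE installer ISO in a throwaway QEMU VM that has the server's real disks attached. A rough sketch of what I mean (package name, ISO URL/version, disk devices and VNC handling are just example values to adapt):
Code:
# in the Hetzner rescue system
apt update && apt install qemu-system-x86
wget https://enterprise.proxmox.com/iso/proxmox-ve_7.0-1.iso   # adjust to the current ISO
qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 \
  -cdrom proxmox-ve_7.0-1.iso -boot d \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -drive file=/dev/nvme1n1,format=raw,if=virtio \
  -vnc :0
# connect a VNC client to the rescue system's IP on port 5900 (or tunnel it over SSH) and run the installer
# afterwards boot the installed system once more in QEMU (without -cdrom/-boot d), fix the NIC name in
# /etc/network/interfaces, add "hwaddress <MAC of the physical NIC>" to vmbr0, then reboot into the real system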

Personally, I prefer small dedicated DC SSDs for the OS with mdadm and dedicated DC NVMe drives for storage with ZFS. Fastest way to set up and maintain.
 
Hi,
The update from 6.4 to 7.x went fine; when asked by the installer I kept the maintainer's version of the files (this occurred about 4 times during the upgrade).
Also, all VMs are running fine (Linux, Windows, containers).

The only problem I can find is (marked in red):
View attachment 27564
Should I do anything regarding these 2 services or is this normal behavior?
chrony is the new default instead of systemd-timesyncd, but AFAIK if you want to switch, you need to manually set it up for existing setups. What does systemctl status corosync.service show? Can you start the service? If not, please share the output of journalctl -b0 -u corosync.service.
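For an upgraded node where systemd-timesyncd is still active, the manual switch is basically just installing chrony; a minimal sketch, assuming the stock Debian/PVE packages:
Code:
apt update && apt install chrony
# Debian's chrony package normally replaces systemd-timesyncd; disabling it explicitly doesn't hurt
systemctl disable --now systemd-timesyncd 2>/dev/null || true
chronyc tracking   # verify chrony is running and actually syncing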

Also a question... I got a warning in the Repositories section...
View attachment 27565
Should I delete the disabled "pve-no-subscription" line or enable it? And regarding "buster", which is used in a VM (I know that it is old, but it is working), should I also try to update that VM and delete this line?
The no-subscription repository is disabled, so no problem. And if the InfluxDB buster repository works, you can safely ignore that error too. But in general one should avoid mixing suites, so once the InfluxDB bullseye repository is available, you should switch to that.
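To see at a glance which suites the repository files on the host still reference (and to spot such leftover buster entries), something like this is enough; the InfluxDB line below is only an example of what such an entry typically looks like:
Code:
grep -n . /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# an entry such as
#   deb https://repos.influxdata.com/debian buster stable
# would then be switched to the bullseye suite once it is available:
#   deb https://repos.influxdata.com/debian bullseye stable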
 
Hello.
Is it safe to remotely upgrade a Proxmox VE 6.4 instance on ZFS (including the root filesystem)?
pve6to7 --full passed all checks.
 
That comes from the change of the default CPU frequency governor from performance (always the highest base clock possible) to schedutil (depends on load, but is pretty good at providing good performance while still being able to save power if (a few) cores are idling).

We may move the default back though, as it seems that some specific VM loads still cannot really cope with those frequency changes, and for hypervisors defaulting to performance can really be argued for (even if some may still prefer schedutil, e.g., in a homelab or other energy-conscious environment).
This change actually came with the 5.11 kernel, so also in PVE 6, but there most of the initial feedback seemed fine.
I'm currently testing some VMs with spiky workloads, and I notice a latency increase too. I think we should indeed use performance by default, and maybe add an option somewhere to change it if the user wants to (maybe through pvestatd or another daemon).
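Until such an option exists, it can at least be changed at runtime without extra tooling; a minimal sketch using the standard cpufreq sysfs interface (not a PVE-specific setting, and it does not persist across reboots):
Code:
# switch all cores to the performance governor for the current boot
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# re-apply after each reboot, e.g. via a cron @reboot entry or a small systemd unit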
 
I'm currently testing some VMs with spiky workloads, and I notice a latency increase too. I think we should indeed use performance by default, and maybe add an option somewhere to change it if the user wants to (maybe through pvestatd or another daemon).
Already switched back with the latest kernel (it will be available soon on the public repos).
 
Hi,

chrony is the new default instead of systemd-timesyncd, but AFAIK if you want to switch, you need to manually set it up for existing setups. What does systemctl status corosync.service show? Can you start the service? If not, please share the output of journalctl -b0 -u corosync.service.


The no-subscription repository is disabled, so no problem. And if the InfluxDB buster repository works, you can safely ignore that error too. But in general one should avoid mixing suites, so once the InfluxDB bullseye repository is available, you should switch to that.
Hi,

After running

systemctl status corosync.service

I got:

[screenshot: output of systemctl status corosync.service]

The status of the services is:
[screenshot: node services status list]

So chrony is installed (I followed these instructions: https://www.tecmint.com/install-chrony-in-centos-ubuntu-linux/); after that, chrony is installed and running.

If I try to start corosync nothing happens, the status is always dead. "systemd-timesyncd" is now also "disabled" after the chrony installation; I assume that this is OK.

If I run

journalctl -b0 -u corosync.service

I got:
[screenshot: output of journalctl -b0 -u corosync.service]

I am a newbie and do not know what this means...

BR,
Simon
 
Hi,

After running

systemctl status corosync.service

I got:

View attachment 27590
The corosync configuration file does not exist. But if your node is not part of a cluster, you can safely ignore this.
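If you want to double-check that the node really is standalone, two read-only commands are enough; on a single node both should simply report that no cluster/corosync configuration exists:
Code:
ls -l /etc/pve/corosync.conf
pvecm status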

The status of the services is:
View attachment 27592

So chrony is installed (I followed these instructions: https://www.tecmint.com/install-chrony-in-centos-ubuntu-linux/); after that, chrony is installed and running.

If I try to start corosync nothing happens, the status is always dead. "systemd-timesyncd" is now also "disabled" after the chrony installation; I assume that this is OK.
Yes, you don't want two time-syncing services running at the same time.
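A quick way to confirm that only one time-sync service is active:
Code:
systemctl is-active chrony systemd-timesyncd   # expect "active" for chrony, "inactive" for systemd-timesyncd
timedatectl                                    # "System clock synchronized: yes" means the clock is being kept in sync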
 
The corosync configuration file does not exist. But if your node is not part of a cluster, you can safely ignore this.


Yes, you don't want two time-syncing services running at the same time.
So, all is good... phew... thank you :) Can "systemd-timesyncd" somehow be deleted from the service list?
 
So, all is good... phew... thank you :) Can "systemd-timesyncd" somehow be deleted from the service list?
Not really. It's part of the services our API checks the status of, so there will be an entry for it.
 
Hello,
Can I add a Proxmox 7.0 node to a 6.1-3 cluster?
I want to reinstall the existing nodes, as I need to reformat their local storage (the VMs are running on local storage with a 10 Gbit network).
So the idea is to join the 6.1 cluster with a freshly installed 7.0 node, live migrate all VMs onto it, then reinstall the 6.1-3 nodes with 7.0 and migrate the VMs back.
I have noticed that there is corosync 3.0.2-pve4 on 6.1 and corosync 3.1.2-pve2 on 7.0; will those packages cooperate?
 
Can I add a Proxmox 7.0 node to a 6.1-3 cluster?
No. Proxmox VE needs to be updated to the latest 6.4 before it can be upgraded to, or work with, 7.0 for the upgrade.

So the idea is to join the 6.1 cluster with a freshly installed 7.0 node, live migrate all VMs onto it, then reinstall the 6.1-3 nodes with 7.0 and migrate the VMs back.
Upgrade to 6.4 first, then that could work.
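Roughly, and following the official upgrade documentation, that path would look like this on each existing node (repository file paths assumed to be the stock ones; read the pve6to7 output carefully before the major step):
Code:
# bring the node to the latest 6.4 first
apt update && apt dist-upgrade
pve6to7 --full
# then switch the Debian repositories from buster to bullseye (the PVE repository
# entries under /etc/apt/sources.list.d/ need the same suite change)
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
apt update && apt dist-upgrade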

I have noticed that there is corosync 3.0.2-pve4 on 6.1 and corosync 3.1.2-pve2 on 7.0; will those packages cooperate?
Maybe, maybe not - nobody tested that here. We only test upgrading from latest 6.4 - you may have luck, but zero guarantees from our side.
 
Did a fresh install of "7" on a rack system connected to a Raritan VGA KVM, but once the system changes video mode to get to the login prompt, I get an "Invalid video mode" message on the KVM.

This used to work on Proxmox 6. Is there any additional configuration change I need to make?

Thanks!
 
If this happened deterministically with ifupdown - could you please create a new thread and post your /etc/network/interfaces which caused the problem - thanks!


Well, /sbin/ip does link to libbpf, so this might happen - check with `ldd /bin/ip`.
However, a not properly working /bin/ip could also cause quite a few problems regarding network config... (so maybe the ifupdown/ifupdown2 problem is just a result of this issue?)
That was only the first "lab" system (and yes, it was `ldd` I used to track down that problem ;) ); the 2nd/3rd cluster installations/upgrades had no netdata.
 
No. Proxmox VE needs to be updated to the latest 6.4 before it can be upgraded to, or work with, 7.0 for the upgrade.


Upgrade to 6.4 first, then that could work.


Maybe, maybe not - nobody tested that here. We only test upgrading from latest 6.4 - you may have luck, but zero guarantees from our side.
Thank you for the info.
One more question, please. Have you tested 6.4 together with 7.0 in one cluster? The corosync package is the same in 6.1 and 6.4 - 3.0.2-pve4.
 
Did a fresh install of "7" on a rack system connected to a Raritan VGA KVM, but once the system changes video mode to get to the login prompt, I get an "Invalid video mode" message on the KVM.

This used to work on Proxmox 6. Is there any additional configuration change I need to make?
Just for clarity: you manage to install PVE 7.0 just fine, but afterwards, once you boot and get to the normal console login it breaks?
 
The corosync package is the same in 6.1 and 6.4 - 3.0.2-pve4.
No, it is not - if it were, you did not really upgrade to the latest 6.4 ...

Latest PVE 6.4 versions are:
Code:
pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
ceph: 15.2.13-pve1~bpo10
ceph-fuse: 15.2.13-pve1~bpo10
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown2: 3.0.0-1+pve4~bpo10
ksmtuned: 4.20150325+b1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-network-perl: 0.6.0
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
pve-zsync: 2.2
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

So corosync is 3.1.2 there, just like in Proxmox VE 7.0.

Cluster 6.4 and 7.0 is possible, but only for migrating guests from old -> new for the upgrade, the other direction is not supported (may work offline, but not for live-migration!).
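For the actual move in such a mixed 6.4/7.0 window, migration is then done per guest from the old node towards the new one, e.g. (the VMIDs and the target node name are placeholders):
Code:
# on the old 6.4 node, push guests to the freshly installed 7.0 node
qm migrate 100 pve7-node --online     # VM, live migration
pct migrate 200 pve7-node --restart   # container, restart migration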
 
You're right; while the accounting itself was correctly set up from the outside, there was a "display bug" in upstream LXCFS when running with cgroupv2, so the swap values were shown as zero to tools inside the container.

Will be fixed with lxcfs version 4.0.8-pve2, which is currently making its way through the repositories.
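Once that version is available, verifying it from the host is quick (101 is just a placeholder VMID; running containers may need a restart to pick up the new lxcfs):
Code:
apt update && apt install lxcfs
pveversion -v | grep lxcfs    # should now show 4.0.8-pve2 or newer
pct exec 101 -- free -m       # the swap line should show real values again instead of 0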

Can confirm, reporting correct values now on my homeserver containers.
 
