Proxmox VE 7.0 released!

On my local Ryzen machine I set the minimum clock to the base clock speed, as I have been fighting KVM kernel module page faults that occurred under low or idle load. Disabling all C-states kept power draw too high, so I did this instead.

Interestingly, using the tool, performance is better with schedutil than with the performance governor; it seems to turbo up quicker. But I expect schedutil without the minimum speed held high might take longer to ramp out of idle clocks, hence the experience reported here.
Look into tuned (and tuned-adm); it may help with configuring your system with the above in mind (this is what I use, and I installed cpupower as well in my case):
apt install tuned tuned-utils tuned-utils-systemtap -y
Red Hat doc: TUNED-ADM | https://blog.eldernode.com/performance-tuning-and-optimize-debian/
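As a rough sketch of how those pieces can fit together (the profile name and the 3.4GHz value are only examples; pick your CPU's actual base clock):

Code:
tuned-adm list                        # list available tuning profiles
tuned-adm profile virtual-host        # e.g. the profile aimed at virtualization hosts
cpupower frequency-info               # show current governor and frequency limits
cpupower frequency-set -g schedutil   # select the schedutil governor
cpupower frequency-set -d 3.4GHz      # hold the minimum clock at (roughly) base clock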
 
Hi,
after installation of Proxmox VE 7 I open the web UI at https://ip:8006. Everything works fine on a desktop PC, but I get a white screen in an Android phone browser (Chrome),
and there is a JS error (see attached screenshot: 1628919310801.png).
 
Just a late report from updating my training-lab Intel NUC. The upgrade from 6.4 to 7 worked just fine.

As PBS 1 didn't work anymore due to dependency issues (it is intended for Debian Buster), I waited for PBS 2 and then backed up all VMs again to a local USB-connected SATA hard disk. Then I replaced the XFS filesystem for local PVE storage with BTRFS and restored all VMs to it. So you have one tester for BTRFS as VM storage. This Intel NUC has only one SSD, so no BTRFS RAID 1. I am a long-term BTRFS user and I am willing to deal with issues should any arise. So far it is working fine.

I did not change the Debian installation on top of which I installed PVE; it is still using the XFS filesystem.
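In case anyone wants to try the same, a rough sketch of what a BTRFS entry in /etc/pve/storage.cfg can look like (the storage name and path here are only examples, not necessarily what my NUC uses):

Code:
btrfs: local-btrfs
        path /var/lib/pve/local-btrfs
        content images,rootdir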
 
ERROR (Broken pipe) when installing PVE 7.0 @ dbn11MATE

Repro steps:
Debian clean install: <debian-live-11.0.0-amd64-mate+nonfree.iso>
PVE: <https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye>
I followed the steps and all seemed fine, but after

root@myhost:~# apt install proxmox-ve postfix open-iscsi

an error as below appeared while processing the package 187-pve-qemu-kvm_6.0.0-3_amd64.deb:

Code:
...
Preparing to unpack .../187-pve-qemu-kvm_6.0.0-3_amd64.deb ...
Unpacking pve-qemu-kvm (6.0.0-3) ...
dpkg: error processing archive /tmp/apt-dpkg-install-XQEq63/187-pve-qemu-kvm_6.0.0-3_amd64.deb (--unpack):
 trying to overwrite '/usr/share/applications/qemu.desktop', which is also in package qemu-system-data 1:5.2+dfsg-11
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
...
...
Errors were encountered while processing:
 /tmp/apt-dpkg-install-XQEq63/187-pve-qemu-kvm_6.0.0-3_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
...
root@myhost:~#

EXPECTED: No errors, as with PVE 6.4 @ dbn10MATE

EDIT: When the netinstall image <firmware-11.0.0-amd64-netinst.iso> is used
instead of the live image, all is OK and PVE 7.0 installs without issues.
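For anyone who still wants to use the live image, a possible workaround (an assumption on my part, not something I verified) would be to remove Debian's own QEMU packages first, since the conflict is over a file owned by qemu-system-data:

Code:
# assumption: qemu-system-data / qemu-system-x86 were pulled in by the live MATE image
# and conflict with pve-qemu-kvm over /usr/share/applications/qemu.desktop
apt remove qemu-system-data qemu-system-x86
apt autoremove
apt install proxmox-ve postfix open-iscsi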

Many thanks for any possible feedback.
 
Then I replaced the XFS filesystem for local PVE storage with BTRFS and restored all VMs to it. So you have one tester for BTRFS as VM storage.
What about nocow/nodatacow BTRFS options for VM images in your setup?
 
Please post the output of the pveversion -v command
Code:
root@pve:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-5
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1
 
@legend I checked on Android 7.1.2; Google Chrome 92.0.4515.159 and Firefox both work well. What are your Android version and Chrome version?
 
What about nocow/nodatacow BTRFS options for VM images in your setup?
I did not consider it, as I am using an SSD. Also, nocow/nodatacow would prevent BTRFS snapshots, which, if I understand correctly, also prevents VM snapshots, because PVE uses BTRFS snapshots to snapshot the VM raw disk image files it puts in dedicated subvolumes.

From my previous experience with storing VM disk image files on BTRFS, I'd not do this with spinning disks anyway. That experience was a long time ago, but back then the files fragmented to the point where the VM was basically unusable, and "autodefrag" did not help much. As BTRFS still uses COW by default, I do not expect that this has changed.
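Just for illustration (not something I use myself): nodatacow can also be set per directory with chattr, and it only takes effect for files created there afterwards; the path below is made up:

Code:
mkdir -p /mnt/btrfs-pool/images-nocow
chattr +C /mnt/btrfs-pool/images-nocow    # new files inherit the No_COW attribute
lsattr -d /mnt/btrfs-pool/images-nocow    # should show the 'C' flag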
 
Are there still a lot of packages that are not available for Bullseye?

Is it advisable to keep using Buster for the time being, allowing some time for Bullseye to catch up?
 
Are there still a lot of packages that are not available for Bullseye?
All relevant packages have been available since the release of Proxmox VE 7; what do you miss?

Is it advisable to keep using Buster for the time being, allowing some time for Bullseye to catch up?
A Debian release won't gain any new source packages after the point of the soft freeze, which was 2021-02-12 for Bullseye.

Proxmox VE (and other projects) often uploads new packages for its own stack, or newer versions of packages previously provided directly by Debian, but normally only if they are deemed to fix actual issues and/or bring very interesting/useful new features.
But directly after an initial Debian release, all its package versions are quite up to date, so for now it's not planned to do so for any package.
 
Thank you Thomas, there just seem to be a few things people are complaining about not working. It took a while and some 3rd-party patches to get things almost perfect the last time around, but now I am starting again, so I was just wondering if I should go with the tried and tested. Usually the first release of most things will have bugs. I always thought that, although based on Debian, Proxmox follows the Ubuntu kernel and updates, meaning it won't freeze the same way as Debian does. Obviously you can tell I'm a Linux noob, so just asking questions before my rebuild :)
 
Thank you Thomas, there just seem to be a few things people are complaining about not working.
With the flexibility of Proxmox VE, the wide range of different hardware, changing software (be it ours or what is actually running in the virtual guests), and big differences in the needs of users, there will always be some complaints about issues in a wider sense (bugs, enhancement requests, setup/configuration troubles, ...) :)

The fact is that we already see many tens of thousands of accesses from different systems on our Proxmox VE 7.x Bullseye repo infrastructure; compared to the count of problem posts, that does not seem like an outstanding amount of noticeable issues. To clarify, I'm certainly not saying that there are no issues whatsoever, just that there are not significantly more than any other release (e.g., 6.4) has, and that we have already fixed, or are actively working on, known and reproducible ones, as we always try to do.
It took a while and some 3rd-party patches to get things almost perfect the last time around, but now I am starting again, so I was just wondering if I should go with the tried and tested.
What 3rd-party patches do you need? And where, the kernel? If you have such patches applied, it'd surely be worth it to first test an upgrade on test hardware, if possible, as we obviously cannot test every imaginable 3rd-party patch.

Usually the first release of most things will have bugs.
FWIW: All but the most trivial software has bugs, but yes, you're right in that any new, and especially bigger, release can contain new bugs, sometimes only affecting certain setups. That's why we recommend testing out any newer release before upgrading important production services, ideally using similar HW to the actual setup(s), or at least doing so in a virtual setup modelled after the actual setup as closely as possible.

I always thought that, although based on Debian, Proxmox follows the Ubuntu kernel and updates, meaning it won't freeze the same way as Debian does.
We only base the kernel off Ubuntu's, albeit with a few patches on top and our own release; mostly because we want somewhat more recent kernel releases than Debian provides, for improved HW support, but also because (server) HW certification already exists for the Ubuntu kernel. The remaining packages come either from our repositories (core PVE packages) or from Debian's.

Obviously you can tell I'm a Linux noob, so just asking questions before my rebuild :)
I honestly cannot really tell, but answering the question would need some more specific details about your setup, like the ones about 3rd-party patches asked above. It would probably be better to post that, and the concerns you have, in its own thread to avoid crowding the more general release thread.
 
I upgraded two hybrid clusters, both running Ceph, from Proxmox 6 (Debian Buster) to Proxmox 7 (Debian Bullseye).

The following two wiki pages had everything needed for the upgrade:

* https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

* https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus
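Worth noting for anyone following the same path: the upgrade wiki also documents a checklist script that can be run before and during the upgrade (output is host-specific, this is just the invocation):

Code:
pve6to7 --full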


The first cluster upgrade went perfectly. I was able to keep KVM guests running and migrate them live from old to upgraded Proxmox hosts without issue.

The second hybrid cluster went fine with the Ceph migration. The Bullseye update went fine too, until reboot. Conveniently, systemd renamed various Ethernet interfaces (again), so the bonded network interfaces were broken. After getting everything renamed, the system worked fine. I also had one host that had somehow masked networking startup, which I fixed via:

Code:
systemctl unmask networking

So overall it went really well, except for that one issue, which is probably an upstream Debian/systemd matter, not Proxmox.
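For reference, one common way to keep interface names stable across such upgrades (not what I did here, just a sketch; the MAC address and name are placeholders) is a systemd .link file:

Code:
# /etc/systemd/network/10-persistent-eno1.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eno1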

Thanks!
 