Proxmox VE 8.1 released!

I've run into an odd breaking issue with SPICE audio.

If I add an ich9-intel-hda audio device using the SPICE backend driver to a VM, that VM fails to start, and the GUI task log only shows QEMU exit code 1.
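
For reference, that kind of audio device corresponds to an audio0 line in the VM config, roughly like this (illustrative example of the qm config syntax, not the poster's exact config):
Code:
audio0: device=ich9-intel-hda,driver=spice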

I had to start it via the CLI to see that the audio driver was the issue.
I'm reinstalling a VM now (I thought it was corrupted and deleted it), but once that's done I'll try to recreate the error and report the exact output.

EDIT - Here's the error:
Code:
# qm start 10999
audio: Could not init `spice' audio driver
start failed: QEMU exited with code 1
Could you please create a new thread? Mention me there and please post the VM config (qm config 10999).
 
So, I have the community edition (8.0.3) and can't do the upgrade via apt. Simply no new packages/updates are found. What do I have to do to be able to upgrade to 8.1? I only have the repositories that came with 8.0.
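If no updates show up at all, it is usually a repository question. A minimal sketch, assuming you want the free pve-no-subscription repository (the path and suite below are the standard ones for PVE 8 on Debian 12 Bookworm):
Code:
# add the no-subscription repository (skip if it is already configured)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# refresh package lists and perform the upgrade
apt update
apt dist-upgrade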
 
After the upgrade the host is running at a very high load:

(screenshot: host load graph)

The system contains 2x 2 TB NVMe and 4x 8 TB Samsung SSDs, all in ZFS pools (no zpool upgrade from 2.1 to 2.2 done yet).

In iotop there is nothing unusual to see: low disk activity and no zpool scrub running.
 
8.0 to 8.1:
TASK ERROR: EFI base image '/usr/share/pve-edk2-firmware//AAVMF_CODE.fd' not found
Check whether apt update and apt dist-upgrade give you these three packages; it seems they were added later on.
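A quick way to check (a sketch; the exact split of the EDK2 firmware into separate pve-edk2-firmware-* packages such as pve-edk2-firmware-ovmf and pve-edk2-firmware-aarch64 is an assumption based on the 8.1 packaging):
Code:
# refresh the package lists, then see which EDK2 firmware packages are available
apt update
apt search pve-edk2-firmware
apt dist-upgrade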
 
After the upgrade the host is running at a very high load:

(screenshot: host load graph, attachment 58784)

The system contains 2x 2 TB NVMe and 4x 8 TB Samsung SSDs, all in ZFS pools (no zpool upgrade from 2.1 to 2.2 done yet).

In iotop there is nothing unusual to see: low disk activity and no zpool scrub running.
Same here on my node, a little 1L server with 1 TB NVMe+SSD on ZFS. The machine is just idle, no VMs running. Something seems to add +1 to the load average on top of what is already there; CPU usage is nonexistent.
(screenshot: load average graph)
On my other nodes (44c/88t CPUs) I don't see any difference.
 
I would just like to be absolutely clear on the upgrade path. We have a 3-node cluster running 8.0.4 with Ceph (Quincy).

Besides the normal precautions (thorough backups with verification, and testing), is this really as simple as hitting the upgrade button and letting it run? And then, separately, doing the Ceph upgrade as shown in the docs?

While upgrading Ceph, is it OK that some hosts run Quincy and others run Reef for a short period?
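For reference, a minimal sketch of the health checks worth running on each node before and after its upgrade (standard PVE/Ceph commands; nothing cluster-specific assumed):
Code:
pveversion        # confirm the node's PVE version
ceph -s           # overall cluster health; wait for HEALTH_OK before moving to the next node
ceph versions     # shows which daemons are still on Quincy vs. already on Reef during the rolling upgrade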
 
Hi, I have a problem with kernel version 6.5.11-4-pve: after migration, VMs freeze. I reverted back to version 6.2.16-19-pve, but the problem with that version is that VMs sometimes freeze at the initial RAM disk stage when booting.

How do I fix the problem, please?
This is the error from the VM:
(screenshot: RCU stall error from the VM console)
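If you need to stay on the older kernel for now, one option is to pin it as the boot default (a sketch; adjust the version string to the kernel that works for you):
Code:
# list the installed kernels, then pin the known-good one
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.2.16-19-pve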
 
I had the same issue, since I had installed a DKMS module for my Realtek NIC in earlier versions of Proxmox. I removed the r8168 blacklist and just let it use the default kernel driver, and it's been working on the 6.5 kernel with no issues (see the sketch below).
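A minimal sketch of that approach, assuming the blacklist entry lives in a file under /etc/modprobe.d/ (the exact file name below is an assumption; check which file on your system blacklists r8169):
Code:
# drop the out-of-tree driver and its blacklist, then fall back to the in-kernel r8169 driver
apt purge r8168-dkms
rm /etc/modprobe.d/r8168-dkms.conf   # assumed file name
update-initramfs -u -k all
reboot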
You have to use the new version of the driver. You can look it up in the changelog of the r8168-dkms package:

r8168 (8.051.02-3) unstable; urgency=medium

* Add patch for Linux 6.5.
The problem: it requires dkms >= 3.0.11

So you have to do the following:

apt-get install linux-headers-6.5.11-4-pve (to install the new headers)
download the dkms 3.0.12-1 package from here: LINK
download the r8168-dkms 8.052.01-1 from here: LINK
copy it to the proxmox server
apt-get install ./dkms_3.0.12-1_all.deb
apt-get install ./r8168-dkms_8.052.01-1_all.deb

And voilà, it is working again without risking the old bug occurring after some weeks of uptime.
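To verify afterwards that the module actually built against the new kernel, something like this should do (a sketch):
Code:
# confirm that the r8168 module was built and installed for the running kernel
dkms status
modinfo r8168 | grep -E '^(filename|vermagic)'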
 
After the upgrade the host is running at a very high load:

(screenshot: host load graph, attachment 58784)

The system contains 2x 2 TB NVMe and 4x 8 TB Samsung SSDs, all in ZFS pools (no zpool upgrade from 2.1 to 2.2 done yet).

In iotop there is nothing unusual to see: low disk activity and no zpool scrub running.
If autotrim is enabled on your SSD ZFS pool, try turning it off and reboot your host:
zpool set autotrim=off yourpool

High load due to vdev_autotrim process in "D" state (uninterruptible sleep)

autotrim=on
Code:
ps aux | grep " [RD]"
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         417  0.0  0.0      0     0 ?        D<   11:48   0:00 [vdev_autotrim]
root        7592  0.0  0.0  12692  4096 pts/0    R+   11:57   0:00 ps aux

autotrim=off
Code:
ps aux | grep " [RD]"
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root        5288  0.0  0.0  12564  4224 pts/0    R+   12:08   0:00 ps aux
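To check whether autotrim is currently enabled on a pool in the first place (illustrative; replace yourpool with your pool name):
Code:
zpool get autotrim yourpool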
 
You have to use the new version of the driver. You can look it up in the changelog of the r8168-dkms package:


The problem: it requires dkms >= 3.0.11

So you have to do the following:

apt-get install linux-headers-6.5.11-4-pve (to install the new headers)
download the dkms 3.0.12-1 package from here: LINK
download the r8168-dkms 8.052.01-1 from here: LINK
copy it to the proxmox server
apt-get install ./dkms_3.0.12-1_all.deb
apt-get install ./r8168-dkms_8.052.01-1_all.deb

And voilà, it is working again without risking the old bug occurring after some weeks of uptime.

Hello,
I tried this, but in my case installing the headers failed:
Code:
root@pve:~# apt-get install linux-headers-6.5.11-4-pve
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'linux-headers-6.5.11-4-pve-amd64' for regex 'linux-headers-6.5.11-4-pve'
Note, selecting 'proxmox-headers-6.5.11-4-pve' instead of 'linux-headers-6.5.11-4-pve-amd64'
proxmox-headers-6.5.11-4-pve is already the newest version (6.5.11-4).
proxmox-headers-6.5.11-4-pve set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n]
Setting up r8168-dkms (8.051.02-2) ...
Removing old r8168-8.051.02 DKMS files...
Module r8168-8.051.02 for kernel 6.2.16-19-pve (x86_64).
Before uninstall, this module version was ACTIVE on this kernel.

r8168.ko:
- Uninstallation
- Deleting from: /lib/modules/6.2.16-19-pve/updates/dkms/
- Original module
- No original module was found for this module on this kernel.
- Use the dkms install command to reinstall any previous module version.
depmod...
Deleting module r8168-8.051.02 completely from the DKMS tree.
Loading new r8168-8.051.02 DKMS files...
Building for 6.2.16-19-pve 6.5.11-4-pve
Building initial module for 6.2.16-19-pve
Done.

r8168.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/6.2.16-19-pve/updates/dkms/
depmod...
Building initial module for 6.5.11-4-pve
Error! Bad return status for module build on kernel: 6.5.11-4-pve (x86_64)
Consult /var/lib/dkms/r8168/8.051.02/build/make.log for more information.
dpkg: error processing package r8168-dkms (--configure):
installed r8168-dkms package post-installation script subprocess returned error exit status 10
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.5.11-4-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
System booted in EFI-mode but 'grub-efi-amd64' meta-package not installed!
Install 'grub-efi-amd64' to get updates.
Errors were encountered while processing:
r8168-dkms
E: Sub-process /usr/bin/dpkg returned an error code (1)

Can you help?
Thanks
 
I have a problem with live migration after the update.
Node8: PVE 8.0, Linux 6.2.16-15-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-15 (2023-09-28T13:53Z)
Code:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-15-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-6
pve-kernel-5.13: 7.1-9
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2: 6.2.16-15
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx5
ksm-control-daemon: residual config
ksmtuned: 4.20150326+b1
libjs-extjs: 7.0.0-4
libknet1: 1.26-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.9
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 3.0.3-1
proxmox-backup-file-restore: 3.0.3-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.9
pve-cluster: 8.0.4
pve-container: 5.0.4
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.7
pve-qemu-kvm: 8.0.2-6
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.13-pve1
Node81: PVE 8.1, Linux 6.5.11-4-pve (2023-11-20T10:19Z)
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-4-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.0.9
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.5: 6.5.11-4
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: residual config
ksmtuned: 4.20150326+b1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.4
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.9
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.2
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3
Both nodes: CPU E5-2680 v4, mitigations=off, KSM enabled, CPU/RAM load <20%.
Successful migrations from Node8 to Node81:
Win2019, CentOS 8, Debian 11.
Unsuccessful migrations:
CentOS 7, 3 VMs: 3-5 "detected stalls on CPUs" messages in the console per 5 minutes; the VMs died ~40 minutes after migration (guest agent dead, console dead on 1 VM and alive on 2, no network, no response to reboot/shutdown).
Ubuntu 20.04: lots of "detected stalls on CPUs" in the console; the VM is OK but feels flaky.
After a hard reset or soft reboot everything seems fine for more than 2 hours, no stalls in the console.
All VM CPUs are x86-64-v3, NUMA enabled, ballooning enabled on the Linux VMs.
We had no problems with migration between PVE 8 / kernel 6.2.15 nodes.
 
Just updated our 3-node test cluster from 8.0.3. No Ceph, no ZFS, just good old NFS. Did some quick testing.
No problems so far.
 
Attention to all facing an Nvidia error during the upgrade:

If you're encountering an Nvidia error while upgrading, a simple solution is to install the latest Nvidia driver directly from their website. For many, including myself, version NVIDIA-Linux-x86_64-535.129.03 resolved the issue. Just follow these steps for a smooth upgrade (they are consolidated in the sketch below the list):
  1. Download the Latest Driver: Visit the Nvidia website and download the latest driver. The version I used is NVIDIA-Linux-x86_64-535.129.03.
  2. Make the Driver Executable: Change the downloaded file's permissions to executable using the command:
    chmod +x NVIDIA-Linux-x86_64-535.129.03.run
  3. Stop and Disable X Server Applications: If you have any X Server applications like lightdm running, stop and disable them using:
    systemctl stop [service-name]
    systemctl disable [service-name]
    Replace [service-name] with the name of your specific X Server application.
  4. Execute the Installation with DKMS Flag: Install the driver using the command:
    ./NVIDIA-Linux-x86_64-535.129.03.run --dkms
  5. Upgrade to Version 8.1 and Reboot: After installing the driver, proceed with the upgrade to version 8.1 and then reboot your system.
  6. Re-enable X Server Applications: Once your system is back up, enable your X Server applications again.
Additional Note for CUDA in LXCs: If you're using CUDA in LXCs, make sure to install the same Nvidia driver there as well.
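Consolidated sketch of the steps above (assumes lightdm as the display manager and the driver file already downloaded to the current directory; adjust names to your setup):
Code:
chmod +x NVIDIA-Linux-x86_64-535.129.03.run
systemctl stop lightdm
systemctl disable lightdm
./NVIDIA-Linux-x86_64-535.129.03.run --dkms
apt update && apt dist-upgrade   # upgrade to 8.1
reboot
# after the reboot:
systemctl enable lightdm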
 
