Proxmox VE 8.1 released!

Got it! Thank you so much. However, if I upgrade from 8.0 to 8.1, do I still need to run these commands from this guide?
https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_installation

SDN Core

Code:
apt update
apt install libpve-network-perl

add
Code:
source /etc/network/interfaces.d/*
to
Code:
/etc/network/interfaces
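
For the interfaces file, something like this should do it from the shell (a sketch; it appends the line only if it isn't already there):

Code:
grep -q '^source /etc/network/interfaces.d/' /etc/network/interfaces \
    || echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces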

DHCP IPAM

Code:
apt update
apt install dnsmasq
# disable default instance
systemctl disable --now dnsmasq
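
As an optional sanity check, you can confirm the default instance is really off:

Code:
systemctl is-enabled dnsmasq   # should print "disabled"
systemctl is-active dnsmasq    # should print "inactive"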


Or, if I do not plan to use SDN DHCP, should I skip these steps? I just want to upgrade to the latest version without Proxmox complaining about something.


You don't need to do anything at all!
 
Side note 1:
Tried to migrate a VM with clipboard=vnc and got an error about vdagent! After removing clipboard=vnc the migration works. The error:

Code:
2023-11-23 20:02:36 starting migration of VM 101 to node 'pve02' (172.19.100.20)
2023-11-23 20:02:36 starting VM 101 on remote node 'pve02'
2023-11-23 20:02:41 start remote tunnel
2023-11-23 20:02:44 ssh tunnel ver 1
2023-11-23 20:02:44 starting online/live migration on unix:/run/qemu-server/101.migrate
2023-11-23 20:02:44 set migration capabilities
2023-11-23 20:02:44 migration downtime limit: 100 ms
2023-11-23 20:02:44 migration cachesize: 128.0 MiB
2023-11-23 20:02:44 set migration parameters
2023-11-23 20:02:44 start migrate command to unix:/run/qemu-server/101.migrate
2023-11-23 20:02:44 migrate uri => unix:/run/qemu-server/101.migrate failed: VM 101 qmp command 'migrate' failed - The vdagent chardev doesn't yet support migration
2023-11-23 20:02:45 ERROR: online migrate failure - VM 101 qmp command 'migrate' failed - The vdagent chardev doesn't yet support migration
2023-11-23 20:02:45 aborting phase 2 - cleanup resources
2023-11-23 20:02:45 migrate_cancel
2023-11-23 20:02:50 ERROR: migration finished with problems (duration 00:00:15)
TASK ERROR: migration problems
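
In case anyone else hits this, a way to check for and drop the option from the shell (a sketch; assumes VM 101 with the default std display):

Code:
qm config 101 | grep '^vga'    # e.g. "vga: std,clipboard=vnc"
qm set 101 --vga std           # reset the display, dropping the clipboard sub-option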
 
Side note 2:
I'm still having trouble with the noVNC console shrinking for no apparent reason.
If I am in the console, then click on another part of the web GUI, and then return to the console, it shrinks, as you can see in the screenshot:

[screenshot: the console rendered shrunken after switching away and back]

After clicking on another part of the VM configuration, like Hardware, and then clicking back on Console, it gets the right size again.
Very weird.
 
Could you tell me how to do this? (I did not blacklist it, but installed it via DKMS.)
apt remove r8168-dkms then you should be able to go to /etc/modprobe.d/File name.conf
File name should be the name of the driver (or DKMS) you're using; just rename it from .conf to something like .conf.backup and then restart the upgrade process (or remove and reinstall the 6.5 kernel if it didn't install the first time).
 
No, we cherry-picked the patch disabling the broken feature by default:

https://git.proxmox.com/?p=zfsonlinux.git;a=commit;h=96c807af63f70dc930328e5801659a5bd40e6d47

edit: it seems there might be one more (pre-existing!) issue that is just easier to trigger with block cloning enabled. If you want to be extra careful, you can set the tunable mentioned in the linked comment to be on the safe side.
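
If I recall the upstream discussion correctly, the tunable in question is zfs_dmu_offset_next_sync; treat that as an assumption and verify it against the linked comment before applying:

Code:
# assumed tunable from the upstream discussion - double-check before use!
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
# to persist across reboots (zfs may sit in the initramfs):
echo "options zfs zfs_dmu_offset_next_sync=0" >> /etc/modprobe.d/zfs.conf
update-initramfs -u -k all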
 
Got it! Thank you so much. However, if I upgrade from 8.0 to 8.1, do I still need to run these commands from this guide?
No, you do not need to run them. Our dependency definitions always try to ensure a valid state; if those packages were a hard requirement, we would have pulled them in as a direct dependency.

Or, if I do not plan to use SDN DHCP, should I skip these steps? I just want to upgrade to the latest version without Proxmox complaining about something.
Yes, you only need to install those extra packages if you actually want to use that functionality. If you don't, just skip these steps; Proxmox VE won't complain about them missing.
 
Congratulations on the release!

Looks like a lot of new stuff went into this one. I imagine ZFS upgrades are always an ... exciting experience.

Will the SDN feature become a mandatory part of PVE's networking setup at some point? I'm just learning VLANs and just got my head around managing them in the Node --> Network interface. :)
 
Hi,
/dev/hwrng for AMD Ryzen seems to be broken again. If I try to spin up one of my VMs with rng0: source=/dev/hwrng, I get:

TASK ERROR: Cannot start VM with passed-through RNG device: '/dev/hwrng' exists, but '/sys/devices/virtual/misc/hw_random/rng_current' is set to 'none'. Ensure that a compatible hardware-RNG is attached to the host.

I did flag this up as a problem in Proxmox 8.0.3, but it mysteriously got fixed after a kernel update. Now it's back again. See here for the original post.

Obviously I can just use rng0: source=/dev/urandom, but I wonder if this can be fixed again? My BIOS has the latest AGESA ComboAm4v2PI 1.2.0.A update. Would it make any difference if I bought a hardware TPM2 chip?

Any help much appreciated.
From a quick search, it seems to be intentional: https://git.kernel.org/pub/scm/linu.../?id=554b841d470338a3b1d6335b14ee1cd0c8f5d754

I'd suggest just using another RNG source.
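
Switching the source is a one-liner (a sketch, assuming VM ID 100):

Code:
qm set 100 --rng0 source=/dev/urandom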
 
apt remove r8168-dkms then you should be able to go to /etc/modprobe.d/File name.conf
File name should be the name of the driver (or DKMS) you're using; just rename it from .conf to something like .conf.backup and then restart the upgrade process (or remove and reinstall the 6.5 kernel if it didn't install the first time).
After doing that, PVE boots, but the NIC is not working.

Update:

I think I figured it out. For all of you going down the same rabbit hole:

You have to reinstall the r8168 drivers.
See this!
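
In shell terms that boils down to something like this (a sketch; note that, per the build-fix post further down, the module source may also need the net/gso.h patch before it compiles for kernel 6.5):

Code:
apt install --reinstall r8168-dkms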
 
After the upgrade 8.0 --> 8.1 with the new kernel, VMs become unresponsive shortly after a live migration and need to be reset.

Affected VMs so far all have the old default CPU type KVM64. They are running Debian 10-12 with the respective default kernel.
There are no messages in dmesg/console when this happens. New connections to these VMs aren't possible.
If I'm connected via SSH to such a VM, the connection persists and I see a jump in the load there. top doesn't work anymore; only uptime shows this.

All Proxmox hosts tested so far have a Xeon(R) CPU E5-2690 v4.

The problem doesn't only appear when I migrate from an 8.0 host to an 8.1 host, but also when I migrate between two already upgraded 8.1 hosts.

VM storage is on Ceph.

Haven't tried to boot an 8.1 Proxmox with the old 6.2.16-19-pve kernel so far.

After more testing, it looks like this problem: https://forum.proxmox.com/threads/p...11-4-rcu_sched-stall-cpu.136992/#post-608833/

Found the same RCU messages on some VMs.

Will try old kernel next.
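
For anyone wanting to try the same, the old kernel can be pinned like this (a sketch; assumes the 6.2 kernel package is still installed):

Code:
proxmox-boot-tool kernel list                # confirm 6.2.16-19-pve is present
proxmox-boot-tool kernel pin 6.2.16-19-pve   # "unpin" reverts to the default
reboot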
 
To be able to update the kernel to 6.5.11-4-pve WITH the R8168 kernel module working, you need to build the kernel module for the new kernel version. The module was previously built for the kernel version you were running.

To fix it, you'll need to get the kernel headers for the new version:

Code:
apt-get install linux-headers-6.5.11-4-pve

Second, you'll need to fix the kernel module source code, as Realtek did not update it so that it builds correctly for kernel 6.5. Luckily, fixing it is simple. Check my notes at the end for a technical explanation of the issue.

Edit the following file with your favourite editor:

Code:
nano /usr/src/r8168-8.051.02/r8168_n.c

Then add the following lines near the beginning of it:
Code:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6,4,10)
#include <net/gso.h>
#endif

My suggestion is to add it between the two lines below:

Code:
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

So it looks like this:

Code:
#include <linux/netdevice.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6,4,10)
#include <net/gso.h>
#endif
#include <linux/etherdevice.h>

Then you can try building the kernel module:
Code:
dkms build -m r8168 -v 8.051.02 -k 6.5.11-4-pve

You can check if anything went wrong this way:
Code:
cat /var/lib/dkms/r8168/8.051.02/build/make.log

You MIGHT need to install grub-efi-amd64 if you have related errors while building the module. If so, do this:

Code:
apt install grub-efi-amd64

Then retry building the kernel module.
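
Once the build succeeds, you'll presumably also want to install the freshly built module for the new kernel (a sketch using the same versions as above):

Code:
dkms install -m r8168 -v 8.051.02 -k 6.5.11-4-pve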

After that you should be able to upgrade the kernel to 6.5.11-4 and also upgrade Proxmox to 8.1:

Code:
apt dist-upgrade


ADDENDUM - What's wrong with the kernel module source code?

The skb_gso_segment C function definition was moved from the netdevice.h header file to net/gso.h in kernel 6.4.10 (if I'm not wrong) [1].

Hence the error when building the kernel module. The function definition couldn't be found.
The fix is just a matter of adding the net/gso.h header include.
However, that's only necessary from kernel 6.4.10 onward. Hence the
Code:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6,4,10)
conditional and its matching
Code:
#endif
statement around the include.

References:
[1] - https://lore.kernel.org/netfilter-devel/1367259744-8922-16-git-send-email-pablo@netfilter.org/
 
I had the same issue, since I had to install a DKMS module for my Realtek NIC in earlier versions of Proxmox. I removed the r8168 blacklist and just let it use the kernel's default driver, and it's been working on the 6.5 kernel with no issues.
Hello,
I need some more help with this.
I also had to install the r8168-dkms driver package for my NIC before, for PVE 8.
It went fine with previous kernel updates.
Now, with the update to PVE 8.1, my network connection does not work any more.

I managed to get into the recovery-mode console from GRUB with the new kernel 6.5.

The module r8168 is not found during startup.

How do I now remove the r8168 driver and use the built-in driver r8169, which should work again according to the previous post?
Or how do I remove the blacklist for r8169?
I don't have a network/internet connection.

I find these entries in /etc/modprobe.d/r8168-dkms.conf:

Code:
alias pci:v000*xyz* r8168
alias pci:v000*xyz* r8168

Do I just comment out these lines?
Do I need to uninstall r8168-dkms via apt?
Do I need to rebuild the kernel then?
Can someone help me with the steps in the shell?

Many Thanks
Christian
 
How do I now remove the r8168 driver and use the built-in driver r8169?
Or how do I remove the blacklist for r8169?
Rename that file to something like R8168-DKMS.conf.backup, then reboot. That should let the R8169 driver load. Once it boots run lspci -v to check.
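
In shell terms, roughly (a sketch; regenerating the initramfs is my assumption, since modprobe.d files get copied into it and it shouldn't hurt):

Code:
mv /etc/modprobe.d/r8168-dkms.conf /etc/modprobe.d/r8168-dkms.conf.backup
update-initramfs -u -k all
reboot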
 
Rename that file to something like R8168-DKMS.conf.backup, then reboot. That should let the R8169 driver load. Once it boots run lspci -v to check.
Thanks for the swift reply; that did it!
lspci -v now gives:
Kernel driver in use: r8169
Kernel modules: r8169
 
Be careful if you upgrade to 8.1 and have any VM with PVSCSI as the SCSI controller; it looks like some guest OSes can't boot (I had problems with Win2016 and Rocky Linux 8, no problems with Rocky Linux 9 and Debian 12).

https://forum.proxmox.com/threads/upgrade-to-8-1-cant-boot-my-win-2016-vm-pvscsi-anymore.137009/

I've solved it now, but I was forced to change the SCSI controller to VirtIO (which is not bad anyway).
It should not depend on the guest OS, but on whether OVMF is used and the disk is attached as SCSI.

A package with a fix, i.e. pve-edk2-firmware=4.2023.08-2, is currently available on the pvetest repository.
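
If you need the fix before it lands in the regular repositories, a sketch for pulling it from pvetest (assumes PVE 8 on Debian Bookworm; drop the list file again afterwards):

Code:
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
apt install pve-edk2-firmware=4.2023.08-2
rm /etc/apt/sources.list.d/pvetest.list && apt update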
 
Hi,
After the upgrade 8.0 --> 8.1 with the new kernel, VMs become unresponsive shortly after a live migration and need to be reset.

Affected VMs so far all have the old default CPU type KVM64.
What other CPU types do you have that were not affected?

Could you please answer there instead of here? Then it will be easier to follow the discussion and we won't clutter the release thread too much. Please also share the configuration of an affected VM (qm config <ID>).
 
