Proxmox VE 8.0 released!

Yes, the cryptsetup complaints are normal.

Here's my complete term.log. The part of the log relating to the error is here:
Code:
Setting up linux-headers-6.1.0-11-amd64 (6.1.38-4) ...
/etc/kernel/header_postinst.d/dkms:
dkms: running auto installation service for kernel 6.1.0-11-amd64.
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/sfc/4.15.14.1001/source/dkms.conf)
[last line repeated 15 more times]
Deprecated feature: REMAKE_INITRD (/etc/dkms/framework.conf)
Sign command: /usr/lib/linux-kbuild-6.1/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Certificate or key are missing, generating self signed certificate for MOK...
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/sfc/4.15.14.1001/source/dkms.conf)

Building module:
Cleaning build area...
'make' -C /var/lib/dkms/sfc/4.15.14.1001/build/linux_net KPATH=/lib/modules/6.1.0-11-amd64/build NDEBUG=1.....................(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.1.0-11-amd64 (x86_64)
Consult /var/lib/dkms/sfc/4.15.14.1001/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.1.0-11-amd64 failed!
run-parts: /etc/kernel/header_postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-6.1.0-11-amd64.postinst line 11.
dpkg: error processing package linux-headers-6.1.0-11-amd64 (--configure):
 installed linux-headers-6.1.0-11-amd64 package post-installation script subprocess returned error exit status 1

The kernel 6.1.0-11-amd64 is not one provided by Proxmox. Did you install it manually at some point? Or was Proxmox VE installed on top of Debian? Then you should also remove the linux-image-amd64 package: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye#Remove_the_Debian_Kernel
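For reference, the removal described in that wiki page boils down to something like the following. The 6.1 kernel series matches the log above, but this is a sketch: double-check which linux-image packages are actually installed on your system first.

Code:
# list the installed Debian kernel packages (names here are examples)
dpkg -l 'linux-image*'
# remove the Debian meta-package and the matching kernel images
apt remove linux-image-amd64 'linux-image-6.1*'
# make sure the bootloader no longer references the removed kernels
update-grub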
 
OK, it seems I had previously installed sfc-dkms_4.15.14.1001-1_all.deb manually; that's the Solarflare driver from their homepage. Running apt remove --purge sfc-dkms seems to have resolved the issue.
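For anyone hitting the same problem, the cleanup amounted to roughly this; sfc-dkms is the package name on my system, so check the output of dkms status for yours:

Code:
# confirm which DKMS module is failing to build
dkms status
# purge the vendor DKMS package
apt remove --purge sfc-dkms
# let dpkg finish configuring the previously failed linux-headers package
dpkg --configure -a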

EDIT: Issue solved. It was the manually installed Solarflare DKMS kernel module; I had forgotten about that. Thanks for your help @fiona

I'll delete some posts above to keep the thread clean.
 
I installed one more server with PVE 8. There is no migration issue on it. I'll reinstall the first server and try again.
I'll create a new thread if the issue comes up again.
 
My issue was present on kernel pve-kernel-6.2.16-3-pve and gone on kernel proxmox-kernel-6.2.16-6-pve.
 
The PVE 6.x kernel is the only option in Proxmox VE 8, and it has a bug with nested ESXi on the AMD platform.
This is not a bug in the kernel, and reverting that patch now would simply be wrong and could introduce actual bugs (it would also still break some newer VMware tools like Workstation 17). After a discussion with another kernel developer, one of our devs will take a stab at adding (conveying) FLUSHBYASID support ourselves – but no promises yet.
 
@t.lamprecht

Is there any progress on sorting out the KSM degradation and the huge performance loss (CPU spikes to 100%) in Windows guests with the 6.2 kernels?
 
I'm really waiting for a 6.3+ kernel for the much better AMD power management. This could make a big difference in my homelab :D
 
There is currently a bug in ZFS on Sapphire Rapids related to the AMX features in the processor. It frequently locks up the kernel (even when just booting from ZFS). Add clearcpuid=600 to /etc/kernel/cmdline, run proxmox-boot-tool refresh, then reboot. A fix is in ZFS 2.2, which is currently at RC4.
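In concrete commands, the workaround looks roughly like this; it assumes the host boots via proxmox-boot-tool (on a GRUB-booted host the parameter would instead go into GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, followed by update-grub):

Code:
# append clearcpuid=600 to the single-line kernel command line
sed -i 's/$/ clearcpuid=600/' /etc/kernel/cmdline
# regenerate the boot entries so the new command line takes effect
proxmox-boot-tool refresh
reboot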
 
We have a prototype here for a pure serial console install, but it was too late in the dev cycle for Proxmox VE 8.0, and the TUI installer already helps many users even without it. We will ship support for plain serial consoles in a future release.
Hi Thomas,

OK, is there any way to use the "pure serial console" installation today, without waiting for the next release?
 
Will Proxmox finally support a cluster of 2 nodes, so that if one of them goes down, the other no longer becomes read-only? Such behavior is absurd.
 
Will Proxmox finally support a cluster of 2 nodes, so that if one of them goes down, the other no longer becomes read-only? Such behavior is absurd.
No, we do actually care about providing data integrity and resilience against issues stemming from unsynchronized resource access in a split-brain cluster. We certainly do not see that as absurd, but rather as a must-have standard feature for enterprise-class hypervisors.

You can use an external QDevice if your setup/budget/whatever doesn't allow for more than two nodes:
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
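For reference, setting up such a QDevice on a small third machine looks roughly like the following; <QDEVICE-IP> is a placeholder for the external machine's address, and the linked docs cover the details:

Code:
# on the external machine providing the third vote
apt install corosync-qnetd
# on every cluster node
apt install corosync-qdevice
# on one cluster node, register the QDevice with the cluster
pvecm qdevice setup <QDEVICE-IP>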
 
OK, is there any way to use the "pure serial console" installation today, without waiting for the next release?
Not easily via our official ISO, as the ISO's bootloader, GRUB, would need to be configured to do all input/output via the serial connection from the very start.

While you might be able to add the required changes and repack the ISO, it'd be way easier to install Proxmox VE on top of a plain Debian installation, see:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
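If you do want to try repacking the ISO, the GRUB side of it would look something like this snippet in the ISO's grub.cfg; the first serial port at 115200 baud is an assumption here:

Code:
# route GRUB's own input/output over the first serial port
serial --unit=0 --speed=115200
terminal_input serial
terminal_output serial
# additionally, console=ttyS0,115200 would need to be appended to the
# kernel command line of the boot entries so the installer stays on serial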
 
No, we do actually care about providing data integrity and resilience against issues stemming from unsynchronized resource access in a split-brain cluster. We certainly do not see that as absurd, but rather as a must-have standard feature for enterprise-class hypervisors.

You can use an external QDevice if your setup/budget/whatever doesn't allow for more than two nodes:
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
Why are you unable to ensure data integrity with 2 nodes when e.g. VMware can?
 
