Proxmox VE 8.0 released!

Hey all,

the upgrade from 7 to 8 went smoothly, kudos to the Proxmox team! Now I'm updating my LXC containers and I've noticed that the Debian 12 LXCs show all available cores from the host in the "htop" utility (some are shown as "offline"), even though I have only assigned one core to them. With Debian 11 the CPU core count shows correctly in htop. I assume this is an htop issue, but I'm curious whether anyone else has noticed this.
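A quick way to see where the two views diverge (lxcfs virtualises the /proc files, while the sysfs CPU tree, which newer htop builds appear to consult as well, is not masked) is to compare them from the host; CT ID 101 below is just a placeholder:
Code:
# CT ID 101 is a placeholder; adjust to one of the Debian 12 containers
pct exec 101 -- nproc                              # affinity view -> assigned cores
pct exec 101 -- grep -c ^processor /proc/cpuinfo   # lxcfs-filtered /proc view -> assigned cores
pct exec 101 -- sh -c 'ls -d /sys/devices/system/cpu/cpu[0-9]* | wc -l'   # raw sysfs -> all host cores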
 
Hi,

First of all I want to congratulate the whole Proxmox team for their good job.

I was able to upgrade from 7 to 8, though with some problems. On 2 different nodes (2 of 7), I get many messages in the logs, like this:

Code:
Jul 09 11:08:51 pve7 qmeventd[2596]: could not get vmid from pid 288500
Jul 09 11:08:52 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
Jul 09 11:08:52 pve7 qmeventd[2596]: could not get vmid from pid 295745
Jul 09 11:08:56 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
Jul 09 11:08:56 pve7 qmeventd[2596]: could not get vmid from pid 288500
Jul 09 11:08:57 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
Jul 09 11:08:57 pve7 qmeventd[2596]: could not get vmid from pid 295745
Jul 09 11:09:01 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
Jul 09 11:09:01 pve7 qmeventd[2596]: could not get vmid from pid 288500
Jul 09 11:09:02 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
Jul 09 11:09:02 pve7 qmeventd[2596]: could not get vmid from pid 295745
Jul 09 11:09:06 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
Jul 09 11:09:06 pve7 qmeventd[2596]: could not get vmid from pid 288500

And ps shows this:

Code:
ps aux|grep 288500
root 288500 3.8 4.7 5460500 3071604 ? Sl 10:42 2:07 /usr/bin/kvm -id 20200105 -name mssql.xxxx.yyy,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/20200105.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/20200105.pid -daemonize -smbios type=1,uuid=32d3ae4d-7d93-4536-8318-df811a7f5b51 -smp 3,sockets=1,cores=3,maxcpus=3 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/20200105.vnc,password=on -cpu host,+aes,+kvm_pv_eoi,+kvm_pv_unhalt -m 3072 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device vmgenid,guid=611143e0-3cf9-4370-be45-40f7c4d9997e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -chardev socket,path=/var/run/qemu-server/20200105.qga,server=on,wait=off,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on -iscsi initiator-name=iqn.1993-08.org.debian:01:f075904da36 -drive if=none,id=drive-ide2,media=cdrom,aio=io_uring -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive file=/dev/zvol/rpool/data/vm-20200105-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100 -netdev type=tap,id=net0,ifname=tap20200105i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=7E:5C:EE:12:56:4D,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024 -machine type=pc+pve0

So VMID is correct: 20200105


The same for 295745

Code:
ps aux|grep 295745
root 295745 57.2 12.7 9767876 8342200 ? Sl 10:44 35:38 /usr/bin/kvm -id 202005191 -name geam10x65b,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/202005191.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/202005191.pid -daemonize -smbios type=1,uuid=7613b2bc-a6e8-44a4-af62-7de5f50378fb -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/202005191.vnc,password=on -cpu host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt -m 8096 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device vmgenid,guid=049e9f65-9dda-4a85-8216-c5fdfc58390b -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -readconfig /usr/share/qemu-server/pve-usb.cfg -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device usb-host,vendorid=0x0781,productid=0x5591,id=usb0 -chardev socket,id=tpmchar,path=/var/run/qemu-server/202005191.swtpm -tpmdev emulator,id=tpmdev,chardev=tpmchar -device tpm-tis,tpmdev=tpmdev -device VGA,id=vga,bus=pci.0,addr=0x2,edid=off -chardev socket,path=/var/run/qemu-server/202005191.qga,server=on,wait=off,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:f075904da36 -drive if=none,id=drive-ide0,media=cdrom,aio=io_uring -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=101 -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive file=/dev/zvol/rpool/data/vm-202005191-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100 -drive file=/dev/zvol/rpool/VM64K/vm-202005191-disk-0,if=none,id=drive-scsi1,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1 -netdev type=tap,id=net0,ifname=tap202005191i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=8E:3F:D2:3D:E4:B3,netdev=net0,bus=pci.0,addr=0x12,id=net0 -netdev type=tap,id=net1,ifname=tap202005191i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=12:E2:8E:71:23:CC,netdev=net1,bus=pci.0,addr=0x13,id=net1 -netdev type=tap,id=net2,ifname=tap202005191i2,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown -device e1000,mac=CE:47:AD:29:D0:50,netdev=net2,bus=pci.0,addr=0x14,id=net2 -netdev type=tap,id=net3,ifname=tap202005191i3,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=56:D0:D1:9F:78:FA,netdev=net3,bus=pci.0,addr=0x15,id=net3 -rtc driftfix=slew,base=localtime -machine hpet=off,type=pc-i440fx-5.1+pve0


Code:
qm list
      VMID NAME             STATUS   MEM(MB)  BOOTDISK(GB)  PID
  20200105 mssql.xxxx.yyy   running  3072     32.00         288500
 202005191 geam10x65b       running  8096     108.00        295745

So qm also shows the correct VMID.


The problem is that when I stop either of these 2 VMs (or both of them) from the web interface, nothing actually happens. The Proxmox web interface shows them as stopped, but in reality, even after 10-15 min, they are not. If I try to start them from the web interface, I see this (after stopping both VMs at the same time):

trying to acquire lock...
TASK ERROR: can't lock file '/var/lock/qemu-server/lock-202005191.conf' - got timeout

Code:
  20200105 mssql.xxx.yyy    running  3072     32.00         288500
 202005191 geam10x65b       running  8096     108.00        295745

Jul 09 12:01:00 pve7 pvedaemon[480544]: VM 20200105 qmp command failed - VM 20200105 qmp command 'guest-ping' failed - unable to connect to VM 20200105 qga socket - timeout after 31 retries

The only way to start these VMs is to reboot the node (which takes a lot of time).


UPDATE: after several reboots of these 2 nodes, the problem has now vanished...
I am afraid to do another reboot now ;) Tomorrow after working hours I will try another reboot, to see if the problem persists!



Thx. in advance.

Good luck / Bafta !
 
Hi,
Code:
Jul 09 11:08:51 pve7 qmeventd[2596]: could not get vmid from pid 288500
Jul 09 11:08:52 pve7 qmeventd[2596]: unexpected cgroup entry 13:blkio:/qemu.slice
do you have anything that modifies the cgroups on the system? Did you do any special operations with the VMs? If the issue happens again, please post the output of cat /proc/<PID>/cgroup with the PID from the error message and the output of cat /proc/mounts | grep cgroup.
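For reference, with the PID from the error message above that would be:
Code:
cat /proc/288500/cgroup
cat /proc/mounts | grep cgroup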

EDIT: Are you maybe running a hybrid cgroup system? I was able to reproduce a similar issue (after unmounting a certain legacy cgroup) and have now sent patches that should fix it: https://lists.proxmox.com/pipermail/pve-devel/2023-July/058049.html

EDIT 2: seems like the order of legacy cgroup entries is not the same on every boot, so that explains why rebooting fixed it. Best to wait for the patches to be applied before rebooting systems with legacy cgroup support enabled.
 
Hi again ;)

After upgrading (without any problem) to ver. 8, a CT with Debian 11.7 (+ PBS inside, Linux pbs 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64) and no "legacy cgroup" did not start, and I see this error:

Code:
run_buffer: 322 Script exited with status 255
lxc_init: 844 Failed to run lxc.hook.pre-start for container "20230601"
__lxc_start: 2027 Failed to initialize container "20230601"
TASK ERROR: startup for container '20230601' failed


.... and another relevant error:

Code:
2023-07-10T17:27:32.329315+03:00 pmx1 systemd[1]: Started pve-container@20230601.service - PVE LXC Container: 20230601.
2023-07-10T17:27:32.748595+03:00 pmx1 kernel: [  155.566797] loop0: detected capacity change from 0 to 33554432
2023-07-10T17:27:32.900588+03:00 pmx1 kernel: [  155.718207] ext4: Unknown parameter 'noacl'
2023-07-10T17:27:32.946502+03:00 pmx1 pvedaemon[3670]: startup for container '20230601' failed

[SOLUTION]:

Change the ACLs option for any affected vHDD from noacl to "Default".
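For anyone hitting the same "ext4: Unknown parameter 'noacl'" message: on the CLI the setting shows up as an acl=0 flag on the affected mount point in the CT config, so a rough sketch of the same fix would be (the storage/volume name below is only a placeholder):
Code:
pct config 20230601 | grep 'acl=0'
#   e.g.:  rootfs: local:20230601/vm-20230601-disk-0.raw,acl=0,size=16G   (placeholder volume)
# re-set the mount point without the acl flag so it falls back to "Default"
pct set 20230601 --rootfs local:20230601/vm-20230601-disk-0.raw,size=16G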

How lucky I am ;)

Good luck / Bafta !
 
After going from 7.4 to 8 my server began going from GRUB to a kernel panic.

I am on older second-hand enterprise hardware. Not sure where I should look for the problem. I was able to boot back into kernel 5.15.108 just fine. Any ideas?

Edit: looking at the apt dist-upgrade output, I see
Code:
dkms: autoinstall for kernel: 6.2.16-3-pve failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-6.2.16-3-pve.postinst line 20.
dpkg: error processing package pve-kernel-6.2.16-3-pve (--configure):
 installed pve-kernel-6.2.16-3-pve package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of pve-kernel-6.2:
 pve-kernel-6.2 depends on pve-kernel-6.2.16-3-pve; however:
  Package pve-kernel-6.2.16-3-pve is not configured yet.

dpkg: error processing package pve-kernel-6.2 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-kernel-6.2; however:
  Package pve-kernel-6.2 is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-kernel-6.2.16-3-pve
 pve-kernel-6.2
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)

Also:
Code:
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]


For now it seems that things are working as long as I can make sure I boot into the old kernel.
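For anyone in the same situation: pinning the known-good kernel avoids having to pick it in the GRUB menu on every boot. A sketch, assuming a reasonably recent proxmox-boot-tool (otherwise GRUB_DEFAULT in /etc/default/grub can be set by hand):
Code:
proxmox-boot-tool kernel list                  # show the installed kernels
proxmox-boot-tool kernel pin 5.15.108-1-pve    # version as reported by the running system
# ...and once the dkms/6.2 issue is sorted out:
proxmox-boot-tool kernel unpin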
 
Yes, I am running the hybrid cgroup option for the kernel.
FYI, Proxmox VE 8 (and for that matter Debian 12 Bookworm) will be the last major release supporting the hybrid cgroup option, so please ensure to migrate your CTs to newer distro versions (released in 2018 or later, see the docs) over the next three years (i.e., until EOL of Proxmox VE 8 in 2026, start of Q3) or switch them to VMs.
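A quick way to check which layout a host currently runs, and where hybrid mode is usually switched on (assuming it was enabled the common way, via the systemd.unified_cgroup_hierarchy=0 kernel parameter):
Code:
stat -fc %T /sys/fs/cgroup/    # 'cgroup2fs' = unified (the default), 'tmpfs' = hybrid/legacy
grep -s systemd.unified_cgroup_hierarchy /etc/default/grub /etc/kernel/cmdline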
 
dkms: autoinstall for kernel: 6.2.16-3-pve failed!
You got a dkms module installed, and rebuilding that for the new kernel fails – did you install the pve-headers meta-meta-package that pulls in the current meta-package (as of now for Proxmox VE 8 that would be pve-headers-6.2), which pulls in the actual kernel header package of the newest 6.2 kernel?

If so, then please check higher up for more output; in any case, when the dkms module fails to rebuild, the new kernel fails to install cleanly, which could explain the boot issues.
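In practice that boils down to something like the following (kernel version taken from the log above); the dkms output should then show which module fails to build and why:
Code:
apt update && apt install pve-headers   # meta-package -> pve-headers-6.2 -> headers for the newest 6.2 kernel
dkms status                             # list registered dkms modules and their state
dkms autoinstall -k 6.2.16-3-pve        # retry the rebuild for the new kernel, with full output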
 
Hello, we have a cluster with several servers running version 7.4, and we have already updated a couple of them to version 8. When we move machines from a server with version 7.4 to one with version 8.0, the machines experience kernel panic. This happens with different operating systems and their versions. However, if we move the same machines between servers running version 8, this issue does not occur. Has anyone else experienced this issue?

BR
 
Hello, we have a cluster with several servers running version 7.4, and we have already updated a couple of them to version 8. When we move machines from a server with version 7.4 to one with version 8.0, the machines experience kernel panic. This happens with different operating systems and their versions. However, if we move the same machines between servers running version 8, this issue does not occur. Has anyone else experienced this issue?

BR
What CPUs are in use in the different servers? IIRC, VM migrations moving away from the 5.15 based kernel were more sensitive to differences between CPU models, due to an issue in masking feature flags – also, what are the VM configurations of some of the affected VMs?

Probably a good idea to answer this in a new thread, to avoid crowding the general release one.
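Something like this from the source and the target node would be a good starting point for that thread (VM ID 100 is just a placeholder):
Code:
lscpu | grep 'Model name'                      # CPU model on each node involved
qm config 100 | grep -Ei 'cpu|machine|ostype'  # relevant parts of an affected VM's config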
 
You got a dkms module installed, and rebuilding that for the new kernel fails – did you install the pve-headers meta-meta-package that pulls in the current meta-package (as of now for Proxmox VE 8 that would be pve-headers-6.2), which pulls in the actual kernel header package of the newest 6.2 kernel?

If so, then please check higher up for more output; in any case, when the dkms module fails to rebuild, the new kernel fails to install cleanly, which could explain the boot issues.

Thanks for the reply. I ran apt install pve-headers to see what I would get.

Code:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
pve-headers is already the newest version (8.0.1).
The following packages were automatically installed and are no longer required:
  dctrl-tools g++-10 libatk1.0-data libboost-context1.74.0 libboost-coroutine1.74.0 libboost-iostreams1.74.0
  libboost-program-options1.74.0 libboost-thread1.74.0 libbpf0 libcbor0 libdns-export1110 libgdk-pixbuf-xlib-2.0-0
  libgdk-pixbuf2.0-0 libicu67 libisc-export1105 libleveldb1d liblua5.2-0 libmpdec3 libopts25 libperl5.32 libprocps8
  libprotobuf23 libpython3.11 libpython3.9 libpython3.9-minimal libpython3.9-stdlib libsigsegv2 libstdc++-10-dev libtiff5
  liburing1 libwebp6 perl-modules-5.32 pve-headers-5.15.107-2-pve pve-kernel-5.15.107-1-pve python3-ldb python3-talloc
  python3.9 python3.9-minimal telnet
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
3 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up pve-kernel-6.2.16-3-pve (6.2.16-3) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 6.2.16-3-pve /boot/vmlinuz-6.2.16-3-pve
dkms: running auto installation service for kernel 6.2.16-3-pve.
Sign command: /lib/modules/6.2.16-3-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Error! The /var/lib/dkms/wireguard/1.0.20210219/6.2.16-3-pve/x86_64/dkms.conf for module wireguard includes a BUILD_EXCLUSIVE directive which does not match this kernel/arch/config.
This indicates that it should not be built.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.2.16-3-pve failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-6.2.16-3-pve.postinst line 20.
dpkg: error processing package pve-kernel-6.2.16-3-pve (--configure):
 installed pve-kernel-6.2.16-3-pve package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of pve-kernel-6.2:
 pve-kernel-6.2 depends on pve-kernel-6.2.16-3-pve; however:
  Package pve-kernel-6.2.16-3-pve is not configured yet.

dpkg: error processing package pve-kernel-6.2 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-kernel-6.2; however:
  Package pve-kernel-6.2 is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-kernel-6.2.16-3-pve
 pve-kernel-6.2
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)

Should I start a new thread?

Thanks for all you guys do. Proxmox is awesome.

Edit:
I got the kernel installed. I set the repositories back to Bullseye and used apt to remove pve-headers-6.x and everything else related to the 6.2 kernel. I then set the repositories back to Bookworm and ran apt update and the dist-upgrade. pve7to8 then reported that the 6.2 kernel is installed, so I rebooted into it. Now my only issue seems to be that pvescheduler.service isn't running.


Another edit: after some time, pvescheduler.service came up once I started it manually. Looks good now!
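For reference, the dkms failure earlier in this post came from the out-of-tree wireguard module, which isn't needed on kernel 6.x at all (WireGuard has been in-tree since Linux 5.6). An alternative to temporarily switching repositories would have been to drop that module, roughly like this (assuming it was installed via the old wireguard-dkms package):
Code:
dkms status                               # shows wireguard/1.0.20210219 registered
dkms remove wireguard/1.0.20210219 --all  # version taken from the error message above
apt purge wireguard-dkms                  # if the module came from this Debian package
apt -f install                            # let the pve-kernel packages finish configuring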
 
nvidia vgpu 16.0 works fine in pve 8 - https://gitlab.com/polloloco/vgpu-proxmox
This is doing binary patches on the NVIDIA driver though, as they really don't support Kernel 6.0 or newer by default.

Binary patching is really nothing I'd classify as stable or working fine (out of the box), it's rather experimental and one should take caution with such steps, especially as the linked readme recommends pulling, building and running lots of third-party code as root on the PVE host (i.e., they can do whatever, including host take over).

If you're fine with those risks, and it works for you – then great, but this isn't something I'd just recommend without at least a disclaimer that it's doing some rather "funky stuff" so that others are also aware that it isn't exactly risk-free.
 
This is doing binary patches on the NVIDIA driver though, as they really don't support Kernel 6.0 or newer by default.

Binary patching is really nothing I'd classify as stable or working fine (out of the box), it's rather experimental and one should take caution with such steps, especially as the linked readme recommends pulling, building and running lots of third-party code as root on the PVE host (i.e., they can do whatever, including host take over).

If you're fine with those risks, and it works for you – then great, but this isn't something I'd just recommend without at least a disclaimer that it's doing some rather "funky stuff" so that others are also aware that it isn't exactly risk-free.
Understood the risk, but I have an Nvidia P4 and there is no need to patch the host driver at all; I'm just using the out-of-the-box driver.
I don't know whether the Nvidia driver supports kernel 6+ or not, but the 16.0 vGPU driver is working fine in PVE 8.
 
This is doing binary patches on the NVIDIA driver though, as they really don't support Kernel 6.0 or newer by default.

Binary patching is really nothing I'd classify as stable or working fine (out of the box), it's rather experimental and one should take caution with such steps, especially as the linked readme recommends pulling, building and running lots of third-party code as root on the PVE host (i.e., they can do whatever, including host take over).

If you're fine with those risks, and it works for you – then great, but this isn't something I'd just recommend without at least a disclaimer that it's doing some rather "funky stuff" so that others are also aware that it isn't exactly risk-free.
Nope, I can confirm that the original vGPU 16.0 driver directly from Nvidia, unpatched, works with Proxmox 8 and the 6.2 kernel. Tested with a Tesla P100.

@proxmox Team: Is it possible to add an official Kernel 5.15 PVE package to PVE 8.0?
With Nvidia vGPU 16.0, these censored at Nvidia have removed support for vCS functionality/licensing. You can now only get vCS through their "NVIDIA AI Enterprise" license. For reference: vCS costs ~$430 annually per GPU; NVIDIA AI Enterprise costs $4,500 annually per GPU (at the minimum). I'm sure larger businesses can swallow licensing costs 10x-ing overnight, but small businesses are looking mighty dumb right now, so some way to keep vGPU 15.3 alive would be a godsend.
 
Ran into an issue. I haven't rebooted yet as this seems fairly major...you know, broken mismatched kernel stuff:

Code:
Preparing to unpack .../00-dkms_3.0.10-8_all.deb ...
Unpacking dkms (3.0.10-8) over (2.8.4-3) ...
dpkg: warning: unable to delete old directory '/etc/dkms/template-dkms-mkdeb/debian': Directory not empty
dpkg: warning: unable to delete old directory '/etc/dkms/template-dkms-mkdeb': Directory not empty
dpkg: warning: unable to delete old directory '/etc/dkms/template-dkms-mkbmdeb/debian': Directory not empty
dpkg: warning: unable to delete old directory '/etc/dkms/template-dkms-mkbmdeb': Directory not empty


Setting up pve-kernel-6.2.16-4-pve (6.2.16-4) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 6.2.16-4-pve /boot/vmlinuz-6.2.16-4-pve
dkms: running auto installation service for kernel 6.2.16-4-pve.
Sign command: /lib/modules/6.2.16-4-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Certificate or key are missing, generating self signed certificate for MOK...

Building module:
Cleaning build area...(bad exit status: 2)
make -j32 KERNELRELEASE=6.2.16-4-pve all KPVER=6.2.16-4-pve...(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.2.16-4-pve (x86_64)
Consult /var/lib/dkms/kernel-mft-dkms/4.17.0/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.2.16-4-pve failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-6.2.16-4-pve.postinst line 20.
dpkg: error processing package pve-kernel-6.2.16-4-pve (--configure):
 installed pve-kernel-6.2.16-4-pve package post-installation script subprocess returned error exit status 2


dpkg: dependency problems prevent configuration of pve-kernel-6.2:
 pve-kernel-6.2 depends on pve-kernel-6.2.16-4-pve; however:
  Package pve-kernel-6.2.16-4-pve is not configured yet.

dpkg: error processing package pve-kernel-6.2 (--configure):
 dependency problems - leaving unconfigured


dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-kernel-6.2; however:
  Package pve-kernel-6.2 is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured

Errors were encountered while processing:
 pve-kernel-6.2.16-4-pve
 pve-kernel-6.2
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)

Code:
# pve7to8 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages up-to-date

Checking proxmox-ve package version..
PASS: already upgraded to Proxmox VE 8

Checking running kernel version..
WARN: unexpected running and installed kernel '5.15.108-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.

Analzying quorum settings and state..
INFO: configured votes - nodes: 4
INFO: configured votes - qdevice: 0
INFO: current expected votes: 4
INFO: current total votes: 4

Checking nodelist entries..
PASS: nodelist settings OK

Checking totem settings..
PASS: totem settings OK

INFO: run 'pvecm status' to get detailed cluster status..

= CHECKING HYPER-CONVERGED CEPH STATUS =

INFO: hyper-converged ceph setup detected!
INFO: getting Ceph status/health information..
PASS: Ceph health reported as 'HEALTH_OK'.
INFO: checking local Ceph version..
PASS: found expected Ceph 17 Quincy release.
INFO: getting Ceph daemon versions..
PASS: single running version detected for daemon type monitor.
PASS: single running version detected for daemon type manager.
PASS: single running version detected for daemon type MDS.
PASS: single running version detected for daemon type OSD.
INFO: different builds of same version detected for an OSD. Are you in the middle of the upgrade?
WARN: 'noout' flag not set - recommended to prevent rebalancing during upgrades.
INFO: checking Ceph config..

= CHECKING CONFIGURED STORAGES =

PASS: storage 'Ceph-RBDStor' enabled and active.
PASS: storage 'FreeNAS-ProxBackup' enabled and active.
PASS: storage 'Snippets' enabled and active.
PASS: storage 'cephfs' enabled and active.
PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
PASS: storage 'pve1-bkup1-ds1' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvescheduler.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for supported & active NTP service..
PASS: Detected active time synchronisation unit 'chrony.service'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'pve1-cpu1' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '10.2.2.81' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
PASS: Certificate 'pveproxy-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
INFO: Checking backup retention settings..
PASS: no backup retention problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking permission system changes..
INFO: Checking custom role IDs for clashes with new 'PVE' namespace..
PASS: none of the 1 custom roles will clash with newly enforced 'PVE' namespace
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
WARN: systems seems to be upgraded but LXCFS is still running with FUSE 2 library, not yet rebooted?
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Checking for existence of NVIDIA vGPU Manager..
PASS: No NVIDIA vGPU Service found.
INFO: Checking bootloader configuration...
SKIP: proxmox-boot-tool not used for bootloader configuration
SKIP: No containers on node detected.

= SUMMARY =

TOTAL:    44
PASSED:   39
SKIPPED:  2
WARNINGS: 3
FAILURES: 0

ATTENTION: Please check the output for detailed information!

Code:
#apt update
Hit:1 http://ftp.ca.debian.org/debian bookworm InRelease
Hit:2 http://security.debian.org bookworm-security InRelease
Hit:3 http://ftp.ca.debian.org/debian bookworm-updates InRelease
Hit:4 http://download.proxmox.com/debian/pve bookworm InRelease
Hit:5 http://download.proxmox.com/debian/ceph-quincy bookworm InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.

Those are just the error-related pieces from the install log file that I could see. Any assistance here would be very much appreciated!

Thought I would include the build log:

Code:
# cat /var/lib/dkms/kernel-mft-dkms/4.17.0/build/make.log

DKMS make.log for kernel-mft-dkms-4.17.0 for kernel 6.2.16-4-pve (x86_64)
Wed Jul 12 11:28:59 AM MDT 2023
/bin/sh: 1: Syntax error: Unterminated quoted string
/bin/sh: 1: [: -lt: unexpected operator
make -C /lib/modules/6.2.16-4-pve/build M=/var/lib/dkms/kernel-mft-dkms/4.17.0/build CONFIG_CTF= CONFIG_CC_STACKPROTECTOR_STRONG=  modules
make[1]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
make[1]: Entering directory '/usr/src/linux-headers-6.2.16-4-pve'
/bin/sh: 1: Syntax error: Unterminated quoted string
/bin/sh: 1: [: -lt: unexpected operator
  CC [M]  /var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pci.o
  CC [M]  /var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pciconf.o
/var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pciconf.c: In function ‘close_dma’:
/var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pciconf.c:601:13: error: implicit declaration of function ‘pci_unmap_single’; did you mean ‘dma_unmap_single’? [-Werror=implicit-function-declaration]
  601 |             pci_unmap_single(dev->pci_dev, dev->dma_props[i].dma_map, DMA_MBOX_SIZE, DMA_BIDIRECTIONAL);
      |             ^~~~~~~~~~~~~~~~
      |             dma_unmap_single
/var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pciconf.c: In function ‘ioctl.isra’:
/var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pciconf.c:1269:1: warning: the frame size of 1184 bytes is larger than 1024 bytes [-Wframe-larger-than=]
 1269 | }
      | ^
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:260: /var/lib/dkms/kernel-mft-dkms/4.17.0/build/mst_pciconf.o] Error 1
make[1]: *** [Makefile:2026: /var/lib/dkms/kernel-mft-dkms/4.17.0/build] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-6.2.16-4-pve'
make: *** [Makefile:53: all] Error 2

Code:
# apt update && apt install pve-header

Hit:1 http://security.debian.org bookworm-security InRelease
Hit:2 http://ftp.ca.debian.org/debian bookworm InRelease
Get:3 http://ftp.ca.debian.org/debian bookworm-updates InRelease [52.1 kB]
Get:4 http://download.proxmox.com/debian/pve bookworm InRelease [2,768 B]
Hit:5 http://download.proxmox.com/debian/ceph-quincy bookworm InRelease
Get:6 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 Packages [115 kB]
Fetched 170 kB in 1s (209 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
8 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package pve-header
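(Side note: the meta-package is pve-headers, plural, which is why that last command fails. The build failure itself comes from the old kernel-mft-dkms 4.17.0 module, whose code still uses the legacy PCI DMA API that recent kernels removed. If the Mellanox firmware tools aren't strictly needed on this host, a rough sketch of a way out would be:)
Code:
apt install pve-headers                   # note: plural
dkms status                               # kernel-mft-dkms/4.17.0 will show up here
dkms remove kernel-mft-dkms/4.17.0 --all  # or install a newer MFT release with kernel 6.x support
apt -f install                            # let dpkg finish configuring the pve-kernel packages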
 
Hello, I have an old quad-port NIC (HP NC364T) in a system whose ports were named by 'ID_NET_NAME_PATH' in PVE 7.4, which gave these names: enp4s0f1, enp4s0f0, enp3s0f1 and enp3s0f0.
Now, after upgrading to 8.0, systemd also takes ID_NET_NAME_SLOT into account, and I was having some issues: two of the ports were detected as ens1f0 and ens1f1, while the other two stayed as eth1 and eth4, because renaming them from eth1 or eth4 to ens1f0 or ens1f1 obviously failed since those names already existed.
I've copied /usr/lib/systemd/network/99-default.link to /etc/systemd/network/99-default.link and removed slot from NamePolicy=..., which brought back the old names.
I've now found that there's a kernel parameter that can be set, net.naming-scheme=v247, to keep the v247 naming scheme. I have not tested this yet, but it might be a better future-proof solution in case systemd decides to change stuff again? :)
Later edit: I've removed the 99-default.link I had put in /etc/systemd/network/ and tested the net.naming-scheme=v247 kernel parameter; it works fine. I'll keep this for now.
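For completeness, the two approaches look roughly like this (the GRUB example assumes the host boots via GRUB; hosts using proxmox-boot-tool would add the parameter to /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):
Code:
# approach 1: local override of the default link policy
cp /usr/lib/systemd/network/99-default.link /etc/systemd/network/99-default.link
#   ...then drop "slot" from the NamePolicy= line in the copy, e.g.:
#   NamePolicy=keep kernel database onboard path
# approach 2: pin the old naming scheme on the kernel command line
#   add  net.naming-scheme=v247  to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub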
 
I've now found that there's a kernel parameter that can be set, net.naming-scheme=v247, to keep the v247 naming scheme. I have not tested this yet, but it might be a better future-proof solution in case systemd decides to change stuff again? :)
Later edit: I've removed the 99-default.link I had put in /etc/systemd/network/ and tested the net.naming-scheme=v247 kernel parameter; it works fine. I'll keep this for now.
This is interesting. Where did you find info about "net.naming-scheme="? Google is failing me. lol
Thanks
 
