Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

You need:
amd_pstate=passive

Sometimes also:
amd_pstate.shared_mem=1 amd_pstate=passive

on the kernel command line (GRUB).
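For example, on a default GRUB-booted install the parameters go into /etc/default/grub (a sketch; on a ZFS/systemd-boot install the command line lives in /etc/kernel/cmdline and is applied with proxmox-boot-tool refresh instead):

Code:
# /etc/default/grub - append to the existing GRUB_CMDLINE_LINUX_DEFAULT line
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_pstate=passive"

# apply and reboot
update-grub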
greetz

Thanks, will read on that.
But I am wondering why amd-pstate was (seemingly) the default with 5.19 for the 5700G but not for the 5950X, and why with 6.1 it is (seemingly) not the default anymore, even for the 5700G.
As said, I cannot test/check with 6.1 on the 5950X yet.
 

because of this:
https://www.phoronix.com/news/Linux-6.1-rc7-Easier-AMD-Pstate
AMD isn't yet encouraging the amd_pstate driver to be used by default but the plan for that is to happen once AMD P-State EPP is ready and merged. The AMD P-State EPP code should resolve some known performance issues with the current amd_pstate code.
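To check which cpufreq driver is actually active after booting (a quick sketch):

Code:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver   # e.g. acpi-cpufreq or amd-pstate
cat /proc/cmdline                                          # confirm amd_pstate=passive was applied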
 
After installing kernel 6.1, every Docker instance running inside an LXC stopped working with

[graphdriver] prior storage driver overlay2 failed: driver not supported

The config of every LXC has the following parameters:

Code:
features: nesting=1
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

Even after rebooting into kernel 5.19 or 5.15, the problem still exists - only restoring a backup got rid of it.

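For what it's worth, a quick way to see what Docker reports before/after (a diagnostic sketch, not a fix):

Code:
# inside the affected LXC
docker info --format '{{.Driver}}'   # shows the active storage driver
# on the PVE host: the overlay module should be available for overlay2
lsmod | grep overlay
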
And: I know, Docker in LXC isn't the best option - I'll change that asap and move the Docker LXCs into VMs running Docker.
 
Testing with a 5950X, but I can't install zenpower3; it needs linux-headers-6.1.0-1-pve:

E: Unable to locate package linux-headers-6.1.0-1-pve
E: Couldn't find any package by glob 'linux-headers-6.1.0-1-pve'
 
The headers package of pve-kernel is pve-headers-<abi-version> - install the meta-package for the 6.1 kernel:
apt install pve-headers-6.1 (this should pull in `pve-headers-6.1.0-1-pve`)
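If the module build still fails afterwards, it might help to double-check that the headers for the running kernel are actually present (a quick sketch; the path below is where pve-headers packages usually end up):

Code:
dpkg -l | grep pve-headers                  # pve-headers-6.1 / pve-headers-6.1.0-1-pve should be listed
ls -d /usr/src/linux-headers-$(uname -r)    # headers directory for the currently running kernel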

I hope this helps!
 
The 6.1 based kernel may be useful for some (especially newer) setups, for example if there is improved hardware support that has not yet been backported to 5.15.

Or improved microcode handling ;). I'm running an old AMD SMT system and was following this patch, which has now been merged in kernel 6.1:
https://git.kernel.org/pub/scm/linu.../?id=e7ad18d1169c62e6c78c01ff693fd362d9d65278
Background: https://www.phoronix.com/news/AMD-CPU-Linux-Microcode-Thread and https://bugzilla.kernel.org/show_bug.cgi?id=216211

I just installed and tested according to the info in the bug report, and LWP is now disabled on all cores, which was not the case on <6.1. So far, so good.
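For anyone who wants to verify this on their own SMT system, a rough check (assuming LWP shows up as the lwp flag in /proc/cpuinfo and that every thread should now report the same microcode revision):

Code:
grep -c ' lwp ' /proc/cpuinfo                     # should be 0 once LWP is disabled on all threads
grep -E '^(processor|microcode)' /proc/cpuinfo    # microcode revision per logical CPU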
 
I guess proxmox-boot-tool only automatically selects kernels from the latest two versions. You can add the 5.15 kernel with proxmox-boot-tool kernel add 5.15.74-1-pve and then proxmox-boot-tool refresh.
You are right, 5.15 was still installed:
Code:
root@pvetest01:~# dpkg-query --list | grep pve-kernel
ii  pve-firmware                         3.6-1                          all          Binary firmware code for the pve-kernel
ii  pve-kernel-5.15                      7.2-14                         all          Latest Proxmox VE Kernel Image
ii  pve-kernel-5.15.74-1-pve             5.15.74-1                      amd64        Proxmox Kernel Image
ii  pve-kernel-5.19                      7.2-14                         all          Latest Proxmox VE Kernel Image
ii  pve-kernel-5.19.17-1-pve             5.19.17-1                      amd64        Proxmox Kernel Image
ii  pve-kernel-6.1                       7.3-1                          all          Latest Proxmox VE Kernel Image
ii  pve-kernel-6.1.0-1-pve               6.1.0-1                        amd64        Proxmox Kernel Image
ii  pve-kernel-helper                    7.3-1                          all          Function for various kernel maintenance tasks.

That's why pveversion -v still reports v5.15 as installed. It just isn't available as a boot option anymore, but your advice in #15 fixes that. Thanks.
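For reference, the full sequence from the advice above looks roughly like this:

Code:
proxmox-boot-tool kernel list                # show automatically and manually selected kernels
proxmox-boot-tool kernel add 5.15.74-1-pve   # pin the older kernel manually
proxmox-boot-tool refresh                    # regenerate the boot entries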
 
Thanks, it worked with zenpower3 now, but for NVIDIA vGPU I get:
ERROR: An error occurred while performing the step: "Building kernel modules". See/var/log/nvidia-installer.log for details.
ERROR: An error occurred while performing the step: "Checking to see whether the nvidia-vgpu-vfio kernel module was successfully built". See /var/log/nvidia-installer.log for details.

/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.c:3497:9: warning: statement with no effect [-Wu>
3497 | ret = vfio_unpin_pages(vgpu_dev->dev, tgpfn_buffer, cnt);
| ^~~
cc1: some warnings being treated as errors
/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.c: In function 'nv_vgpu_probe':
/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.c:4841:13: error: implicit declaration of functi>
4841 | if (mdev_register_device(&pdev->dev, phys_dev->vgpu_fops) != 0)
| ^~~~~~~~~~~~~~~~~~~~
| mdev_register_driver
make[2]: *** [scripts/Makefile.build:258: /tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/vgpu-devices.o] Error 1
/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.c:4860:5: error: implicit declaration of functio>
4860 | mdev_unregister_device(&pdev->dev);
| ^~~~~~~~~~~~~~~~~~~~~~
| mdev_unregister_driver
/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.c: In function 'nv_get_device':
/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.c:358:1: error: control reaches end of non-void >
358 | }
| ^
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:258: /tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.o] Err>
make[2]: Target '/tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel/' not remade because of errors.
make[1]: *** [Makefile:1996: /tmp/selfgz96890/NVIDIA-Linux-x86_64-525.60.12-vgpu-kvm-custom/kernel] Error 2
make[1]: Target 'modules' not remade because of errors.
make[1]: Leaving directory '/usr/src/linux-headers-6.1.0-1-pve'
make: *** [Makefile:82: modules] Error 2
ERROR: The nvidia-vgpu-vfio kernel module was not created.
ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation probl>

lsmod | grep 'nvidia\|vfio'
vfio_pci 16384 4
vfio_pci_core 77824 1 vfio_pci
vfio_virqfd 16384 1 vfio_pci_core
irqbypass 16384 127 vfio_pci_core,kvm
vfio_iommu_type1 40960 1
vfio 45056 10 vfio_pci_core,vfio_iommu_type1,vfio_pci
root@pve:~# find /usr/lib/modules -name "*.ko" | grep -i nvidia
/usr/lib/modules/5.15.60-2-pve/kernel/drivers/platform/x86/nvidia-wmi-ec-backlight.ko
/usr/lib/modules/5.15.60-2-pve/kernel/drivers/usb/typec/altmodes/typec_nvidia.ko
/usr/lib/modules/5.15.60-2-pve/kernel/drivers/i2c/busses/i2c-nvidia-gpu.ko
/usr/lib/modules/5.15.60-2-pve/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
/usr/lib/modules/5.15.60-2-pve/kernel/drivers/net/ethernet/nvidia/forcedeth.ko
/usr/lib/modules/5.19.17-1-pve/kernel/drivers/platform/x86/nvidia-wmi-ec-backlight.ko
/usr/lib/modules/5.19.17-1-pve/kernel/drivers/usb/typec/altmodes/typec_nvidia.ko
/usr/lib/modules/5.19.17-1-pve/kernel/drivers/i2c/busses/i2c-nvidia-gpu.ko
/usr/lib/modules/5.19.17-1-pve/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
/usr/lib/modules/5.19.17-1-pve/kernel/drivers/net/ethernet/nvidia/forcedeth.ko
/usr/lib/modules/6.1.0-1-pve/kernel/drivers/platform/x86/nvidia-wmi-ec-backlight.ko
/usr/lib/modules/6.1.0-1-pve/kernel/drivers/usb/typec/altmodes/typec_nvidia.ko
/usr/lib/modules/6.1.0-1-pve/kernel/drivers/i2c/busses/i2c-nvidia-gpu.ko
/usr/lib/modules/6.1.0-1-pve/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
/usr/lib/modules/6.1.0-1-pve/kernel/drivers/net/ethernet/nvidia/forcedeth.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/platform/x86/nvidia-wmi-ec-backlight.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/usb/typec/altmodes/typec_nvidia.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/i2c/busses/i2c-nvidia-gpu.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
/usr/lib/modules/5.15.74-1-pve/kernel/drivers/net/ethernet/nvidia/forcedeth.ko


Passthrough works with this kernel, but vGPU is still a problem.
5950X on an X570M Pro4
dmesg --level=err,warn
[ 0.000000] secureboot: Secure boot could not be determined (mode 0)
[ 0.004257] secureboot: Secure boot could not be determined (mode 0)
[ 0.709618] #17 #18 #19 #20 #21 #22 #23 #24 #25 #26 #27 #28 #29 #30 #31
[ 1.304274] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ 1.304316] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ 1.304317] platform eisa.0: Cannot allocate resource for EISA slot 1
[ 1.304317] platform eisa.0: Cannot allocate resource for EISA slot 2
[ 1.304318] platform eisa.0: Cannot allocate resource for EISA slot 3
[ 1.304318] platform eisa.0: Cannot allocate resource for EISA slot 4
[ 1.304319] platform eisa.0: Cannot allocate resource for EISA slot 5
[ 1.304319] platform eisa.0: Cannot allocate resource for EISA slot 6
[ 1.304320] platform eisa.0: Cannot allocate resource for EISA slot 7
[ 1.304320] platform eisa.0: Cannot allocate resource for EISA slot 8
[ 1.687057] usb: port power management may be unreliable
[ 2.004726] nvme nvme0: missing or invalid SUBNQN field.
[ 15.760570] zenpower: loading out-of-tree module taints kernel.
[ 15.990545] znvpair: module license 'CDDL' taints kernel.
[ 15.990547] Disabling lock debugging due to kernel taint
[ 18.907063] usb 7-4: Warning! Unlikely big volume range (=3072), cval->res is probably wrong.
[ 18.907066] usb 7-4: [5] FU [Mic Capture Volume] ch = 1, val = 4608/7680/1
[ 48.124086] xhci_hcd 0000:08:00.2: xHC error in resume, USBSTS 0x401, Reinit
[ 53.601655] hrtimer: interrupt took 4809 ns




kernel 5.15:
CPU BOGOMIPS: 217197.12
REGEX/SECOND: 2819574
HD SIZE: 93.93 GB (/dev/mapper/pve-root)
BUFFERED READS: 216.59 MB/sec
AVERAGE SEEK TIME: 0.09 ms
FSYNCS/SECOND: 1473.99
DNS EXT: 52.72 ms
DNS INT: 84.88 ms (neutrino-digital.com)


kernel 6.1:
CPU BOGOMIPS: 217183.36
REGEX/SECOND: 4898363
HD SIZE: 93.93 GB (/dev/mapper/pve-root)
BUFFERED READS: 239.50 MB/sec
AVERAGE SEEK TIME: 0.08 ms
FSYNCS/SECOND: 1515.82
DNS EXT: 57.68 ms
DNS INT: 87.66 ms (neutrino-digital.com)
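(Those numbers look like pveperf output; assuming that is what was used, the comparison can be reproduced with a simple run on each kernel:)

Code:
pveperf /    # run once after booting 5.15 and once after booting 6.1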

 
Single GPU passthrough on Asus PRIME X570-PRO with vega64 and vendor reset works.

But a ton of ugly messages:
https://pastebin.com/2jAajxfv


0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XL/XT [Radeon RX Vega 56/64] [1002:687f] (rev c1)
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] RX Vega64 [1002:6b76]
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 HDMI Audio [Radeon Vega 56/64] [1002:aaf8]
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 HDMI Audio [Radeon Vega 56/64] [1002:aaf8]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
 
I have problems with my secondary GPU passthrough setup: passthrough works fine in 5.19, but in 6.1, after the vfio-pci module is loaded, the primary GPU output (efifb) stops working.
There is a strange message in dmesg when vfio-pci is loaded: "Console: colour dummy device 80x25".
The vfio-pci is configured for specific device ids: "options vfio-pci ids=10de:1f08,10de:10f9,10de:1ada,10de:1adb"
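i.e. roughly like this (a sketch of the usual modprobe.d setup; adjust the IDs and the PCI address to your system):

Code:
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1f08,10de:10f9,10de:1ada,10de:1adb

# rebuild the initramfs and reboot, then check which driver got bound
update-initramfs -u -k all
lspci -nnk -s <pci-address-of-the-gpu>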
 
Is this kernel recommended for better handling of CPU resource allocation on Intel 12th/13th gen?
thx
 
When using this kernel, I have problems restoring from my Proxmox Backup Server. I'm only able to restore after rebooting the Proxmox node with kernel 5.19. Otherwise, an "Error" dialog box pops up with the following:
proxmox-backup-client failed: Error: Operation not supported (os error 95) (500)

I can click "OK" and the dialog box disappears, but continuing with an attempt to restore yields messages like the following in the Task Viewer:
recovering backed-up configuration from 'pbs1:backup/ct/100/2022-12-17T05:15:02Z'
Error: Operation not supported (os error 95)
TASK ERROR: unable to restore CT 199 - command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2022-12-17T05:15:02Z pct.conf - --repository root@pam@pbs1:pbs1' failed: exit code 255.
 
Switched to pve-kernel-6.1 yesterday, from 5.15. Apparently, something in 6.1 messes with the device IDs of one single storage device:

After the restart, I noticed a delay in booting the system, so I looked at the console and saw an "A start job is running for dev-disk-by-id /dev/disk/by-id/nvme-eui.6479a7311269019b-part4" notification from systemd, with a timeout of 1m30s. After this expired, boot continued fine - but the delay shows up on subsequent reboots with 6.1. Rebooting with the previous 5.15 kernel fixed this immediately.

I have a ZFS root pool rpool consisting of two mirrored NVMe devices, and another SATA SSD-based storage pool. Both pools were set up with disk IDs, i.e. rpool has been using nvme-eui.0026b7683b8e8485-part4 and nvme-eui.6479a7311269019b-part4 so far. When looking up the mentioned disk IDs, I only see one of the two NVMe devices showing up with the nvme-eui. syntax; the other one suddenly appears as nvme-nvme.<averylongserial>:

Code:
 » ls -l /dev/disk/by-id/nvme*
lrwxrwxrwx 1 root root 13 Dec 18 12:25 /dev/disk/by-id/nvme-KINGSTON_SA2000M8250G_50026B7683B8E848 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-KINGSTON_SA2000M8250G_50026B7683B8E848-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-KINGSTON_SA2000M8250G_50026B7683B8E848-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-KINGSTON_SA2000M8250G_50026B7683B8E848-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-KINGSTON_SA2000M8250G_50026B7683B8E848-part4 -> ../../nvme0n1p4
lrwxrwxrwx 1 root root 13 Dec 18 12:25 /dev/disk/by-id/nvme-PNY_CS3030_250GB_SSD_PNY09200003790100411 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-PNY_CS3030_250GB_SSD_PNY09200003790100411-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-PNY_CS3030_250GB_SSD_PNY09200003790100411-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-PNY_CS3030_250GB_SSD_PNY09200003790100411-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-PNY_CS3030_250GB_SSD_PNY09200003790100411-part4 -> ../../nvme1n1p4
lrwxrwxrwx 1 root root 13 Dec 18 12:25 /dev/disk/by-id/nvme-eui.0026b7683b8e8485 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-eui.0026b7683b8e8485-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-eui.0026b7683b8e8485-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-eui.0026b7683b8e8485-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-eui.0026b7683b8e8485-part4 -> ../../nvme0n1p4
lrwxrwxrwx 1 root root 13 Dec 18 12:25 /dev/disk/by-id/nvme-nvme.1987-504e593039323030303033373930313030343131-504e592043533330333020323530474220535344-00000001 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-nvme.1987-504e593039323030303033373930313030343131-504e592043533330333020323530474220535344-00000001-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-nvme.1987-504e593039323030303033373930313030343131-504e592043533330333020323530474220535344-00000001-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-nvme.1987-504e593039323030303033373930313030343131-504e592043533330333020323530474220535344-00000001-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 15 Dec 18 12:25 /dev/disk/by-id/nvme-nvme.1987-504e593039323030303033373930313030343131-504e592043533330333020323530474220535344-00000001-part4 -> ../../nvme1n1p4

As a result, rpool is no longer imported using device IDs for both vdevs (under 5.15, both were imported as nvme-eui.xxx):
Code:
 » zpool status rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:17 with 0 errors on Sun Dec 11 00:24:18 2022
config:

    NAME                                 STATE     READ WRITE CKSUM
    rpool                                ONLINE       0     0     0
      mirror-0                           ONLINE       0     0     0
        nvme-eui.0026b7683b8e8485-part3  ONLINE       0     0     0
        nvme1n1p3                        ONLINE       0     0     0

errors: No known data errors

The other storage pool is behaving just fine. Any ideas a) why this one device changes its naming convention under 6.1, b1) how to potentially revert this, OR b2) how to fix the ongoing boot delay by switching to other device IDs (e.g. the ones with nvme-KINGSTON_SA20... and nvme-PNY_CS30), considering it is the root pool, which cannot easily be exported and re-imported?

Thanks and regards
 
Maybe useful to know, maybe not worth debugging: on another machine, it completely dies at the boot screen with the 6.1.0-1-pve kernel. Pressing Num Lock at that point reveals the hang; only a hard reset is possible.
It is a cheap machine with a crippled BIOS. Setting 'OS Select' to WIN8 activates UEFI and Secure Boot in the background without any indication anywhere. As I remember, it was the only selection that got the Proxmox ISO to boot at all.

5.15.74-1-pve booted just fine in all cases.

Switching to WIN7/Other reveals a 'CSM' submenu, and from there I switched CSM off. Then it boots fine on 6.1.0-1-pve, but without the Secure Boot enabled message shown in screenshot 1.

root@ps07:~# cat /etc/apt/sources.list && pveversion -v && proxmox-boot-tool kernel list
deb http://ftp.de.debian.org/debian bullseye main contrib

deb http://ftp.de.debian.org/debian bullseye-updates main contrib

# security updates
deb http://security.debian.org bullseye-security main contrib

deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

proxmox-ve: 7.3-1 (running kernel: 6.1.0-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-6.1: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.15: 7.2-14
pve-kernel-6.1.0-1-pve: 6.1.0-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 17.2.5-pve1
ceph-fuse: 17.2.5-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-1
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
Manually selected kernels:
None.

Automatically selected kernels:
5.15.74-1-pve
6.1.0-1-pve
root@ps07:~#
 

Attachments

  • IMG_20221218_213844.jpg
    IMG_20221218_213844.jpg
    23 KB · Views: 19
  • IMG_20221218_213924.jpg
    IMG_20221218_213924.jpg
    58.7 KB · Views: 20
  • IMG_20221218_213957.jpg
    IMG_20221218_213957.jpg
    31.7 KB · Views: 22
When using this kernel, I have problems restoring from my Proxmox Backup Server. I'm only able to restore after rebooting the Proxmox node with kernel 5.19. Otherwise, an "Error" dialog box pops up with the following:
proxmox-backup-client failed: Error: Operation not supported (os error 95) (500)

I can click "OK" and the dialog box disappears, but continuing with an attempt to restore yields messages like the following in the Task Viewer:
recovering backed-up configuration from 'pbs1:backup/ct/100/2022-12-17T05:15:02Z'
Error: Operation not supported (os error 95)
TASK ERROR: unable to restore CT 199 - command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2022-12-17T05:15:02Z pct.conf - --repository root@pam@pbs1:pbs1' failed: exit code 255.

I have the same here with kernel 6.1.

I have two nodes in a cluster. One was already on 6.1, the other was still on 5.19. Both have PBS installed alongside PVE. The PBS storage that is added in PVE for both nodes is on the 6.1 node.
Trying to restore any VM or LXC by selecting the PBS-storage added in PVE on the 6.1-node leads to the mentioned errors, while restoring the same backup by selecting the same PBS-storage added in PVE on the 5.19-node worked as expected:
Screenshot 2022-12-18 203415.png

Edit1: Installed 6.1 in the meantime on the other node too and it now shows the same misbehavior.

If the error occurs:
Screenshot 2022-12-18 201008.png
After clicking "OK", the four fields at the bottom are also not pre-filled:
Screenshot 2022-12-18 201121.png

Bash:
proxmox-backup-client failed: Error: Operation not supported (os error 95) (500)
Bash:
Error: Operation not supported (os error 95)
error before or during data restore, some or all disks were not completely restored. VM 300 state is NOT cleaned up.
TASK ERROR: command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' vm/500/2022-12-17T14:40:25Z qemu-server.conf /var/tmp/vzdumptmp698950/qemu-server.conf --repository pbs-backup@pbs@192.168.1.6:local --ns sync' failed: exit code 255
Bash:
Dec 18 19:45:37 pve pvedaemon[4807]: <root@pam> starting task UPID:pve:000AAA46:00979A7D:639F5FD1:qmrestore:300:root@pam:
Dec 18 19:45:37 pve pvedaemon[698950]: error before or during data restore, some or all disks were not completely restored. VM 300 state is NOT cleaned up.
Dec 18 19:45:37 pve pvedaemon[698950]: command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' vm/500/2022-12-17T14:40:25Z qemu-server.conf /var/tmp/vzdumptmp698950/qemu-server.conf --repository pbs-backup@pbs@192.168.1.6:local --ns sync' failed: exit code 255
Dec 18 19:45:37 pve pvedaemon[4807]: <root@pam> end task UPID:pve:000AAA46:00979A7D:639F5FD1:qmrestore:300:root@pam: command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' vm/500/2022-12-17T14:40:25Z qemu-server.conf /var/tmp/vzdumptmp698950/qemu-server.conf --repository pbs-backup@pbs@192.168.1.6:local --ns sync' failed: exit code 255
Bash:
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: starting new backup reader datastore 'local': "/rpool/datastore/local"
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: protocol upgrade done
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: GET /download
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: download "/rpool/datastore/local/ns/sync/vm/500/2022-12-17T14:40:25Z/index.json.blob"
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: reader finished successfully
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: TASK OK
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: starting new backup reader datastore 'local': "/rpool/datastore/local"
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: protocol upgrade done
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: GET /download
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: download "/rpool/datastore/local/ns/sync/vm/500/2022-12-17T14:40:25Z/index.json.blob"
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: reader finished successfully
Dec 18 19:45:37 pbs proxmox-backup-proxy[3074]: TASK OK
Bash:
proxmox-ve: 7.3-1 (running kernel: 6.1.0-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-6.1: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.15: 7.2-14
pve-kernel-6.1.0-1-pve: 6.1.0-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 10.1-3~bpo11+1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-1
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
Bash:
proxmox-backup-manager versions --verbose
proxmox-backup                unknown      running kernel: 6.1.0-1-pve
proxmox-backup-server         2.3.1-1      running version: 2.3.1
pve-kernel-6.1                7.3-1
pve-kernel-helper             7.3-1
pve-kernel-5.15               7.2-14
pve-kernel-6.1.0-1-pve        6.1.0-1
pve-kernel-5.15.74-1-pve      5.15.74-1
ifupdown2                     3.1.0-1+pmx3
libjs-extjs                   7.0.0-1
proxmox-backup-docs           2.3.1-1
proxmox-backup-client         2.3.1-1
proxmox-mini-journalreader    1.3-1
proxmox-offline-mirror-helper 0.5.0-1
proxmox-widget-toolkit        3.5.3
pve-xtermjs                   4.16.0-1
smartmontools                 7.2-pve3
zfsutils-linux                2.1.6-pve1

I only realized the above after I had gotten a failed backup and, because of this, started trying some things out:
Bash:
INFO: Starting Backup of VM 500 (qemu)
INFO: Backup started at 2022-12-17 00:15:02
INFO: status = running
INFO: VM Name: pbs-pfSense
INFO: include disk 'scsi0' 'local-zfs:vm-500-disk-0' 16G
INFO: include disk 'efidisk0' 'local-zfs:vm-500-disk-1' 1M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/500/2022-12-16T23:15:02Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 500 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 500 failed - VM 500 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
INFO: Failed at 2022-12-17 00:15:02
The backup of this VM works only the first time after the VM was started. Every backup afterwards leads to the above failure. (Worked for months before; no changes made on/to the VM.)
This VM runs on the 6.1 node, which also has the PBS storage on it. It is the only VM on that node. The other two LXCs on that node back up fine.

Edit2: Since tonight, all my VM backups have failed (LXC backups are fine) with the above-mentioned error, so I installed 5.19 on both nodes again. Pinning with proxmox-boot-tool also did not work, so I removed 6.1 for now.
Now back on 5.19 on both nodes, guess what: not only do the restores work again, the backup error is completely gone for all VMs, too!
 
When checking this issue with PBS backup/restore and the opt-in 6.1 kernel we managed to reproduce it on some setups.
That would be those with /tmp located on ZFS (normal if whole root file system is on ZFS).
There, the open call with the O_TMPFILE flag set, for downloading the previous backup index for incremental backup, fails with EOPNOTSUPP 95 Operation not supported.

It seems ZFS 2.1.7 still misses some compat with the 6.1 kernel, which reworked parts of the VFS layer w.r.t. tempfile handling. We notified ZFS upstream with a minimal reproducer and will look into providing a stop-gap fix if upstream needs more time to handle it as they deem correct.

Until then, we recommend either avoiding the initial pve-kernel-6.1.0-1-pve package if you have your root filesystem on ZFS, or moving the /tmp directory off ZFS, e.g. by making it a tmpfs mount.
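For example, a tmpfs-backed /tmp can be set up with a single fstab entry (a sketch; the size is just an example):

Code:
# /etc/fstab
tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,size=2G  0  0

# then either reboot or mount it right away
mount /tmp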
 

Thank you very much for the very fast investigation and response! :)
 
I solved it by mounting /tmp on tmpfs. Thank you.
 
