[TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

Hopefully this will help someone down the road. I have it working on Proxmox 7, but the performance is horrible. My drive was too worn to split across multiple controllers, so I couldn't verify whether that helps. I'll just use this drive for Hyper-V instead.

If you still want to use this drive with Proxmox 7, perform the following steps:

# Step 1. Install iomemory-vsl
Code:
apt update && apt --assume-yes install gcc fakeroot build-essential debhelper rsync dkms zip unzip git pve-headers pve-headers-`uname -r` && apt --assume-yes upgrade && apt --assume-yes autoremove && \
mkdir /home/temp  && cd /home/temp  && \
git clone https://github.com/snuf/iomemory-vsl && \
cd iomemory-vsl && \
make dkms
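
Before moving on, it's worth confirming the module actually built and loads. These are standard dkms/modprobe checks, not part of the original guide (note that fio-status isn't available yet, since the fio utilities are installed in Step 3):

```shell
# Confirm the DKMS build registered for the running kernel
dkms status | grep -i iomemory
# Load the module and verify it is resident
modprobe iomemory-vsl
lsmod | grep iomemory_vsl
```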

# Step 2. Grub Changes (MANDATORY!!!)
# WARNING: You must perform this step if you want your machine to boot afterwards
* Open the following file:
Code:
nano /etc/default/grub

# For Intel CPUs ONLY, modify GRUB_CMDLINE_LINUX_DEFAULT to
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

# For AMD CPUs ONLY, modify GRUB_CMDLINE_LINUX_DEFAULT to
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

* Save the file
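
The guide runs update-initramfs later, but on a GRUB-booted system the changed kernel command line also needs the GRUB config regenerated. A sketch (proxmox-boot-tool applies to installs booted via systemd-boot, e.g. ZFS root):

```shell
# Regenerate the boot configuration so the new command line takes effect
update-grub
# On Proxmox installs booted via systemd-boot (e.g. ZFS root), instead run:
# proxmox-boot-tool refresh
```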

# Step 3. Install Fio Utils
Code:
cd /home/temp && \
wget -O fio-common_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/pd2ohfaufhwqc34/fio-common_3.2.16.1731-1.0_amd64.deb?dl=1 && \
wget -O fio-firmware-fusion_3.2.16.20180821-1_all.deb https://www.dropbox.com/s/kcn5agi6lyikicf/fio-firmware-fusion_3.2.16.20180821-1_all.deb?dl=1 && \
wget -O fio-sysvinit_3.2.16.1731-1.0_all.deb https://www.dropbox.com/s/g39l6lg9of6eqze/fio-sysvinit_3.2.16.1731-1.0_all.deb?dl=1 && \
wget -O fio-util_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/57huby17mteg6wp/fio-util_3.2.16.1731-1.0_amd64.deb?dl=1 && \
dpkg -i fio-firmware-fusion_3.2.16.20180821-1_all.deb fio-util_3.2.16.1731-1.0_amd64.deb fio-sysvinit_3.2.16.1731-1.0_all.deb fio-common_3.2.16.1731-1.0_amd64.deb
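
With the utilities installed, you can verify the card is visible before touching partitions (standard fio-util commands, not from the original guide):

```shell
# Show adapter, firmware, and wear information for all Fusion-io devices
fio-status -a
# If the device is reported as detached, attach it so /dev/fioa appears
fio-attach /dev/fct0
```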

# (Optional) Step 4. Raise the Power Limit from 25 W to 35 W
Code:
nano /etc/modprobe.d/iomemory-vsl.conf

# Add the following line:
# The * applies the override to ALL of your Fusion-io devices. If you have more than one device, replace the * with the serial number of the target device, which can be found using: fio-status -a
Code:
options iomemory-vsl external_power_override=*:35

# Save the file
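
If you prefer to target a single card, the per-serial line can be generated like this. The serial number below is a made-up placeholder; take yours from fio-status -a:

```shell
# Placeholder serial number; substitute the value reported by `fio-status -a`
SERIAL="1149D0969"
# Print the option line that limits only this card to 35 watts
printf 'options iomemory-vsl external_power_override=%s:35\n' "$SERIAL"
# Append the printed line to /etc/modprobe.d/iomemory-vsl.conf instead of the * variant
```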

# Reboot your system
Code:
update-initramfs -u && reboot

# Step 5. Create an LVM-Thin pool that is visible within Proxmox

## Step 5a: Delete any existing partitions and create a new one
* If fio-status -a shows the drive attached as fioa and the drive already contains a partition, you'll need to delete that partition first
* Run the following command:
Code:
fdisk /dev/fioa

type d to delete partition
type n to create new partition (press 'y' and accept the defaults)
type w to write changes
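
The same repartitioning can be scripted non-interactively. A sketch using wipefs and sfdisk (assumes the device is /dev/fioa and DESTROYS any existing data on it):

```shell
# Remove old filesystem/partition-table signatures
wipefs -a /dev/fioa
# Create a single Linux LVM partition (type 8e) spanning the whole device
echo 'type=8e' | sfdisk /dev/fioa
```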

## Step 5b. Create Volume
Code:
pvcreate /dev/fioa1

## Step 5c. Create a volume group
Code:
vgcreate fusion /dev/fioa1

## Step 5d. Create an LVM-Thin pool
Code:
lvcreate -l 100%FREE --thinpool lvfusion fusion

The LVM-Thin pool should now be visible in Proxmox. You may need to go to Datacenter - Storage and add it there.
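
Instead of clicking through Datacenter - Storage, the pool can also be registered from the CLI. The storage ID fusion-thin below is an arbitrary name of my choosing:

```shell
# Register the thin pool as Proxmox storage for VM disks and containers
pvesm add lvmthin fusion-thin --vgname fusion --thinpool lvfusion --content images,rootdir
# Confirm it shows up
pvesm status
```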



If you have a card that's relatively new, you can split this single drive into multiple controllers. I couldn't test the performance impact of this change because my drive is too worn. Theoretically, you should double your IOPS if the drive is formatted with 512-byte sectors, or gain up to 80% with 4K sectors. I did NOT test this; I took these numbers from SanDisk's documentation.

# NOTE: YOU WILL LOSE ALL DATA ON YOUR DRIVE BY DOING THIS
# Step 1. Detach drive
Code:
fio-detach /dev/fct0

# Step 2. Split drive into multiple controllers
You may need to update the path to match your firmware version, but /usr/share/fio/firmware is the default location
Code:
fio-update-iodrive --split -d /dev/fct0 /usr/share/fio/firmware/fusion_3.2.16-20180821.fff

# Step 3. Reboot
Code:
reboot

# Step 4. Format the disks
If everything worked, you should see both /dev/fct0 and /dev/fct1 in the output of fio-status -a

Run the following command to format both devices:
Code:
fio-format /dev/fct0 /dev/fct1

# Step 5. Create RAID 0
If the devices are not attached, you may need to run the following commands:
Code:
fio-attach /dev/fct0
fio-attach /dev/fct1

To create a RAID 0 across the split controllers, we'll need to create two physical volumes
Code:
pvcreate /dev/fioa /dev/fiob

Code:
# Create a volume group for said volumes
vgcreate fusion-vg /dev/fioa /dev/fiob

Code:
# Create a striped logical volume (-i2 indicates 2 stripes)
lvcreate -l 100%VG -n fusion-lv -i2 fusion-vg
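
To confirm the striping took effect, standard LVM reporting works here (nothing Fusion-io-specific):

```shell
# 'stripes' should report 2 and the size should span both controllers
lvs -o lv_name,stripes,stripe_size,lv_size fusion-vg
```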
 
Nice, I just have two drives in a ZFS mirror and performance seems pretty OK.
At least it is way faster than my SATA SSDs.
 
Hello,

any news about an update to Proxmox 8 / kernel 6.2?

Trying to upgrade a previously functional ioScale 3.2 TB from Proxmox 7 to Proxmox 8 ends with this error:

Code:
apt dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/etc/dkms/framework.conf)
Sign command: /lib/modules/6.2.16-3-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/source/dkms.conf)

Building module:
Cleaning build area...(bad exit status: 2)
'make' DKMS_KERNEL_VERSION=6.2.16-3-pve.....(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.2.16-3-pve (x86_64)
Consult /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.2.16-3-pve failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-6.2.16-3-pve.postinst line 20.
dpkg: error processing package pve-kernel-6.2.16-3-pve (--configure):
 installed pve-kernel-6.2.16-3-pve package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of pve-kernel-6.2:
 pve-kernel-6.2 depends on pve-kernel-6.2.16-3-pve; however:
  Package pve-kernel-6.2.16-3-pve is not configured yet.

dpkg: error processing package pve-kernel-6.2 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-kernel-6.2; however:
  Package pve-kernel-6.2 is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-kernel-6.2.16-3-pve
 pve-kernel-6.2
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)

and

Code:
cat /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/make.log
DKMS make.log for iomemory-vsl-5.15.74-1-dbe5052 for kernel 6.2.16-3-pve (x86_64)
Mon Jul 10 16:10:35 CEST 2023
sed -i 's/Proprietary/GPL/g' Kbuild

Change found in target kernel: KERNELVER KERNEL_SRC
Running clean before building driver

make[1]: Entering directory '/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build'
make \
        -j48 \
    -C /lib/modules/6.2.16-3-pve/build \
    FIO_DRIVER_NAME=iomemory-vsl \
    FUSION_DRIVER_DIR=/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build \
    M=/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build \
    EXTRA_CFLAGS+="-I/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/include -DBUILDING_MODULE -DLINUX_IO_SCHED -Wall -Werror" \
    KFIO_LIB=kfio/x86_64_cc122_libkfio.o_shipped \
    clean
make[2]: Entering directory '/usr/src/linux-headers-6.2.16-3-pve'
make[2]: Leaving directory '/usr/src/linux-headers-6.2.16-3-pve'
rm -rf include/fio/port/linux/kfio_config.h kfio_config license.c
make[1]: Leaving directory '/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build'
if [ "122" -gt "74" ];then \
    if [ ! -f "kfio/x86_64_cc122_libkfio.o_shipped" ];then \
        cp kfio/x86_64_cc74_libkfio.o_shipped kfio/x86_64_cc122_libkfio.o_shipped; \
    fi \
fi
./kfio_config.sh -a x86_64 -o include/fio/port/linux/kfio_config.h -k /lib/modules/6.2.16-3-pve/build -p -d /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfio_config -l 0 -s /lib/modules/6.2.16-3-pve/source
Detecting Kernel Flags
Config dir         : /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfio_config
Output file        : include/fio/port/linux/kfio_config.h
Kernel output dir  : /lib/modules/6.2.16-3-pve/build
Kernel source dir  : /lib/modules/6.2.16-3-pve/build
Starting tests:
  1688998236.127  KFIOC_X_PROC_CREATE_DATA_WANTS_PROC_OPS...
  1688998236.298  KFIOC_X_TASK_HAS_CPUS_MASK...
  1688998236.469  KFIOC_X_LINUX_HAS_PART_STAT_H...
  1688998236.639  KFIOC_X_BLK_ALLOC_QUEUE_NODE_EXISTS...
  1688998236.814  KFIOC_X_BLK_ALLOC_DISK_EXISTS...
  1688998236.990  KFIOC_X_HAS_MAKE_REQUEST_FN...
  1688998237.100  KFIOC_X_GENHD_PART0_IS_A_POINTER...
  1688998237.266  KFIOC_X_BIO_HAS_BI_BDEV...
  1688998237.373  KFIOC_X_SUBMIT_BIO_RETURNS_BLK_QC_T...
  1688998237.526  KFIOC_X_VOID_ADD_DISK...
  1688998237.695  KFIOC_X_DISK_HAS_OPEN_MUTEX...
Started tests, waiting for completions...
  1688998238.906  KFIOC_X_PROC_CREATE_DATA_WANTS_PROC_OPS=1
  1688998238.933  KFIOC_X_TASK_HAS_CPUS_MASK=1
  1688998238.957  KFIOC_X_LINUX_HAS_PART_STAT_H=1
  1688998238.983  KFIOC_X_BLK_ALLOC_QUEUE_NODE_EXISTS=0
  1688998240.020  KFIOC_X_BLK_ALLOC_DISK_EXISTS=1
  1688998240.053  KFIOC_X_HAS_MAKE_REQUEST_FN=0
  1688998240.085  KFIOC_X_GENHD_PART0_IS_A_POINTER=1
  1688998240.118  KFIOC_X_BIO_HAS_BI_BDEV=1
  1688998240.150  KFIOC_X_SUBMIT_BIO_RETURNS_BLK_QC_T=0
  1688998240.182  KFIOC_X_VOID_ADD_DISK=0
  1688998240.214  KFIOC_X_DISK_HAS_OPEN_MUTEX=1
Finished
1688998240.549  Exiting
Preserving configdir due to '-p' option: /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfio_config
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
make \
    -j48 \
-C /lib/modules/6.2.16-3-pve/build \
FIO_DRIVER_NAME=iomemory-vsl \
FUSION_DRIVER_DIR=/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build \
M=/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build \
EXTRA_CFLAGS+="-I/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/include -DBUILDING_MODULE -DLINUX_IO_SCHED -Wall -Werror" \
INSTALL_MOD_DIR=extra/fio \
INSTALL_MOD_PATH= \
KFIO_LIB=kfio/x86_64_cc122_libkfio.o_shipped \
modules
make[1]: Entering directory '/usr/src/linux-headers-6.2.16-3-pve'
printf '#include "linux/module.h"\nMODULE_LICENSE("GPL");\n' >/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/license.c
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/main.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/pci.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/sysrq.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/driver_init.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfio.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/errno.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/state.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kcache.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfile.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kmem.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfio_common.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kcpu.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/ktime.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/sched.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/cdev.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kblock.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kcondvar.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kinfo.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kexports.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/khotplug.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kcsr.o
  COPY    /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfio/x86_64_cc122_libkfio.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/module_param.o
  CC [M]  /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/license.o
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfile.c: In function ‘kfio_inode_data’:
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfile.c:147:12: error: implicit declaration of function ‘PDE_DATA’; did you mean ‘NODE_DATA’? [-Werror=implicit-function-declaration]
  147 |     return PDE_DATA(ip);
      |            ^~~~~~~~
      |            NODE_DATA
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfile.c:147:12: error: returning ‘int’ from a function with return type ‘void *’ makes pointer from integer without a cast [-Werror=int-conversion]
  147 |     return PDE_DATA(ip);
      |            ^~~~~~~~~~~~
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/pci.c: In function ‘kfio_pci_set_dma_mask’:
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kblock.c:43:10: fatal error: linux/genhd.h: No such file or directory
   43 | #include <linux/genhd.h>
      |          ^~~~~~~~~~~~~~~
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/pci.c:528:12: error: implicit declaration of function ‘pci_set_dma_mask’; did you mean ‘kfio_pci_set_dma_mask’? [-Werror=implicit-function-declaration]
  528 |     return pci_set_dma_mask((struct pci_dev *)pdev, mask);
      |            ^~~~~~~~~~~~~~~~
      |            kfio_pci_set_dma_mask
compilation terminated.
make[2]: *** [scripts/Makefile.build:261: /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kblock.o] Error 1
make[2]: *** Waiting for unfinished jobs....
cc1: all warnings being treated as errors
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c: In function ‘kfio_sgl_dma_map’:
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:341:9: error: implicit declaration of function ‘pci_map_sg’; did you mean ‘pci_map_rom’? [-Werror=implicit-function-declaration]
  341 |     i = pci_map_sg(lsg->pci_dev, lsg->sl, lsg->num_entries,
      |         ^~~~~~~~~~
      |         pci_map_rom
make[2]: *** [scripts/Makefile.build:260: /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kfile.o] Error 1
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:342:51: error: ‘PCI_DMA_FROMDEVICE’ undeclared (first use in this function); did you mean ‘DMA_FROM_DEVICE’?
  342 |                     dir == IODRIVE_DMA_DIR_READ ? PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE);
      |                                                   ^~~~~~~~~~~~~~~~~~
      |                                                   DMA_FROM_DEVICE
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:342:51: note: each undeclared identifier is reported only once for each function it appears in
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:342:72: error: ‘PCI_DMA_TODEVICE’ undeclared (first use in this function); did you mean ‘DMA_TO_DEVICE’?
  342 |                     dir == IODRIVE_DMA_DIR_READ ? PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE);
      |                                                                        ^~~~~~~~~~~~~~~~
      |                                                                        DMA_TO_DEVICE
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c: In function ‘kfio_sgl_dma_unmap’:
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:388:9: error: implicit declaration of function ‘pci_unmap_sg’; did you mean ‘pci_unmap_rom’? [-Werror=implicit-function-declaration]
  388 |         pci_unmap_sg(lsg->pci_dev, lsg->sl, lsg->num_entries,
      |         ^~~~~~~~~~~~
      |         pci_unmap_rom
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:260: /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/pci.o] Error 1
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:389:57: error: ‘PCI_DMA_FROMDEVICE’ undeclared (first use in this function); did you mean ‘DMA_FROM_DEVICE’?
  389 |                  lsg->pci_dir == IODRIVE_DMA_DIR_READ ? PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE);
      |                                                         ^~~~~~~~~~~~~~~~~~
      |                                                         DMA_FROM_DEVICE
/var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.c:389:78: error: ‘PCI_DMA_TODEVICE’ undeclared (first use in this function); did you mean ‘DMA_TO_DEVICE’?
  389 |                  lsg->pci_dir == IODRIVE_DMA_DIR_READ ? PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE);
      |                                                                              ^~~~~~~~~~~~~~~~
      |                                                                              DMA_TO_DEVICE
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:260: /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build/kscatter.o] Error 1
make[1]: *** [Makefile:2026: /var/lib/dkms/iomemory-vsl/5.15.74-1-dbe5052/build] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-6.2.16-3-pve'
make: *** [Makefile:136: modules] Error 2

Jan
 

By the way, it works when I stay on the older 5.15 kernel:

Code:
Kernel Version Linux 5.15.108-1-pve #1 SMP PVE 5.15.108-1 (2023-06-17T09:41Z)
PVE Manager Version pve-manager/8.0.3/bbf3993334bfa91

Does anybody know whether there is any problem with using the 5.15 kernel under Proxmox 8?

Jan
 
Are you guys using the iomemory package from

https://www.dropbox.com/s/df06nuzvqndlvnk/iomemory-vsl-5.12.1.zip

or

https://github.com/RemixVSL/iomemory-vsl

The version from GitHub works with the latest Proxmox 7 kernel and the current Proxmox 8 kernel. I have Fusion-io drives running on both Proxmox versions.
How did you get it to run on Proxmox 8?
What kernel are you using?

I am currently on Proxmox 7.4-17/513c62be (running kernel: 6.1.15-1-pve),

but I am afraid to upgrade...
Installing them was a nightmare, and I don't want to start all that again...
 
Very good question. I just stumbled upon them on eBay last week and wanted to check them out, too. So having some reassurance would be awesome.
 
Performance-wise I am quite happy, but I am just a Linux rookie, so installation and upgrading make me sweat :cool:
 
Does that mean the ioDrive 2 doesn't work at all without special drivers, or are the drivers only needed for full performance and SMART status?

I was thinking about buying those drives because the price is right.
But it seems the Sun/Oracle/LSI WarpDrives are probably the better choice (or maybe I'm missing something: what are you using them for, and are there benefits to the ioDrives? Then please point me to them).
 
How did you get it to run on proxmox 8?
What kernel are you using?

I am currently on proxmox 7.4-17/513c62be (running kernel: 6.1.15-1-pve)

but I am afraid to upgrade...
Installing them was a nightmare, and I don't want to go through all that again...
The upgrade process is pretty easy. The iomemory-vsl for 7.4 does not compile on the 8.x versions of Proxmox. Before starting the 7 -> 8 Proxmox upgrade, shut down anything using the drive and remove the iomemory-vsl dkms module. Next, run the Proxmox upgrade to 8. Then download the latest iomemory-vsl zip from https://github.com/RemixVSL/iomemory-vsl and build, install, and load the dkms module from it. Finally, start up anything that was using the device again: zpools, filesystems, LVM, etc.
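Condensed into command form, that order looks roughly like the dry-run sketch below. Every step is only printed, not executed, and the module version (3.2.16) and pool name (tank) are placeholder assumptions; take the real values from `dkms status` and `zpool list` on your own box:

```shell
#!/bin/sh
# Dry-run sketch of the PVE 7 -> 8 order described above.
# step() only prints each command; drop the echo to run things for real.
step() { echo "+ $*"; }

step zpool export tank                        # 1. stop everything using the drive
step dkms remove iomemory-vsl/3.2.16 --all    #    and drop the old module
step apt dist-upgrade                         # 2. the normal Proxmox 7 -> 8 upgrade
step git clone https://github.com/RemixVSL/iomemory-vsl
step make -C iomemory-vsl dkms                # 3. rebuild/install the dkms module
step zpool import tank                        # 4. bring storage back online
```

If your root filesystem or VM storage lives on the ioDrive, do the export/remove from a rescue environment or after migrating the guests elsewhere first.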
 
The upgrade process is pretty easy. The iomemory-vsl for 7.4 does not compile on the 8.x versions of Proxmox. Before starting the 7 -> 8 Proxmox upgrade, shut down anything using the drive and remove the iomemory-vsl dkms module. Next, run the Proxmox upgrade to 8. Then download the latest iomemory-vsl zip from https://github.com/RemixVSL/iomemory-vsl and build, install, and load the dkms module from it. Finally, start up anything that was using the device again: zpools, filesystems, LVM, etc.
Wow, if this is what you call pretty easy, I wonder what a complex upgrade looks like for you :eek:

All my VMs and containers are on those disks; do I need to move them to another one?

I thought maybe I should first upgrade the kernel to 6.2, or is your route the better one?

Does that mean the ioDrive 2 doesn't work at all without special drivers, or are the drivers only needed for full performance and SMART status?

I was thinking about buying those drives because the price is right.
But it seems the Sun/Oracle/LSI WarpDrives are probably the better choice (or maybe I'm missing something: what are you using them for, and are there benefits to the ioDrives? Then please point me to them).
Yes, you need drivers...
https://github.com/RemixVSL/iomemory-vsl

The benefit is that they are faster than (consumer) SSDs and less prone to wear. Consumer SSDs tend to wear out very fast in server environments. They are also quite cheap nowadays: I got two 785 GB drives for about 70 bucks each.
 
I didn't have to do anything when I upgraded to PVE 8, or to the new 6.x kernel. My machines, containers, etc. are on the Fusion-io drives, and I was reluctant after reading this thread, but the module built without issue for me. :p

Code:
Last login: Wed Nov 8 09:07:58 2023
root@pve1:~# dkms status
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/3.2.16/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.13.19-6-a727d9d/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.107-2-5dcda15/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.107-2-5dcda15/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.107-2-5dcda15/source/dkms.conf)
iomemory-vsl/3.2.16, 5.13.19-6-pve, x86_64: built
iomemory-vsl/5.13.19-6-a727d9d, 5.15.107-2-pve, x86_64: built
iomemory-vsl/5.15.107-2-5dcda15, 5.15.108-1-pve, x86_64: built
iomemory-vsl/5.15.107-2-5dcda15, 6.2.16-18-pve, x86_64: installed
iomemory-vsl/5.15.107-2-5dcda15, 6.2.16-19-pve, x86_64: installed
root@pve1:~# uname -a
Linux pve1 6.2.16-18-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-18 (2023-10-11T15:05Z) x86_64 GNU/Linux
 
Wow, if this is what you call pretty easy, I wonder what a complex upgrade looks like for you :eek:

All my VMs and containers are on those disks; do I need to move them to another one?

I thought maybe I should first upgrade the kernel to 6.2, or is your route the better one?


Yes, you need drivers...
https://github.com/RemixVSL/iomemory-vsl

The benefit is that they are faster than (consumer) SSDs and less prone to wear. Consumer SSDs tend to wear out very fast in server environments. They are also quite cheap nowadays: I got two 785 GB drives for about 70 bucks each.
I understand that. But the LSI/Seagate Nytro WarpDrives (also rebranded as the Sun F80) are also MLC and industrial grade, at 800 GB for 70 bucks, while not requiring shenanigans with custom drivers and the like (with the ioDrives you cannot just take any random rescue CD, like an Arch ISO or Medicat, to repair the system or recover data).

Anyway, I'm really glad they don't go to landfill (like most hardware does after a few years).
 
I understand that. But the LSI/Seagate Nytro WarpDrives (also rebranded as the Sun F80) are also MLC and industrial grade, at 800 GB for 70 bucks, while not requiring shenanigans with custom drivers and the like (with the ioDrives you cannot just take any random rescue CD, like an Arch ISO or Medicat, to repair the system or recover data).
Thank you for the input. I also wanted to buy some drives to play around with, and having no kernel module to compile is a big plus.

Can you share some performance numbers of those drives?
 
I didn't have to do anything when I upgraded to PVE 8, or to the new 6.x kernel. My machines, containers, etc. are on the Fusion-io drives, and I was reluctant after reading this thread, but the module built without issue for me. :p

Code:
Last login: Wed Nov 8 09:07:58 2023
root@pve1:~# dkms status
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/3.2.16/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.13.19-6-a727d9d/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.107-2-5dcda15/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.107-2-5dcda15/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/iomemory-vsl/5.15.107-2-5dcda15/source/dkms.conf)
iomemory-vsl/3.2.16, 5.13.19-6-pve, x86_64: built
iomemory-vsl/5.13.19-6-a727d9d, 5.15.107-2-pve, x86_64: built
iomemory-vsl/5.15.107-2-5dcda15, 5.15.108-1-pve, x86_64: built
iomemory-vsl/5.15.107-2-5dcda15, 6.2.16-18-pve, x86_64: installed
iomemory-vsl/5.15.107-2-5dcda15, 6.2.16-19-pve, x86_64: installed
root@pve1:~# uname -a
Linux pve1 6.2.16-18-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-18 (2023-10-11T15:05Z) x86_64 GNU/Linux
Did you not uninstall the dkms module first?
 
No? Is that a thing? I thought the point of DKMS modules was they recompile / install for each kernel automatically?
Yes, but someone earlier in this thread mentioned that it did not recompile with the new kernel.


As a test I just performed the upgrade without any other precautions and it worked fine!

My compliments to the Proxmox team!
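For reference, whether a DKMS module rebuilds itself on a kernel upgrade is controlled by its dkms.conf. The fragment below is only an illustrative sketch (the field values are assumptions, not the actual RemixVSL file): AUTOINSTALL="yes" is what triggers the automatic rebuild, and REMAKE_INITRD is the deprecated option behind the warnings visible in the dkms status output quoted earlier in the thread.

```shell
# Illustrative dkms.conf fragment (values are assumptions, not the real file)
PACKAGE_NAME="iomemory-vsl"
PACKAGE_VERSION="5.15.107-2"
BUILT_MODULE_NAME[0]="iomemory-vsl"
DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
AUTOINSTALL="yes"      # rebuild automatically when a new kernel is installed
# REMAKE_INITRD="yes"  # deprecated; source of the warnings in `dkms status`
```

If AUTOINSTALL is set and the build still fails on a new kernel, it usually means the source itself is incompatible with that kernel's API, as with the PCI_DMA_* errors earlier in this thread.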
 
