[TUTORIAL] Configuring Fusion-io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

Today this is the reply I received from Western Digital:

We have checked with our engineering team on your query regarding ioScale 2 support for Debian 10.

Unfortunately, there are no plans at this stage of the ioMemory life cycle to add support for Debian 10 “Buster”, as it is considered a major OS update. Currently we are only adding support for minor updates of operating systems already supported.

Kindly let me know if you have any further queries on this case.

Does this mean Fusion-io Scale 2 usage stops with Debian 9/PVE 5?
 
Support case filed; hopefully it helps and WDC updates the driver some time.

=================
Hello Jan,

Kindly be informed that we haven't tested the ioMemory VSL drivers with Debian 10; only Debian 9 (Stretch) is supported as of now.

I'll keep you posted if there are any future updates on this.
Regarding Proxmox Virtual Environment (Debian 10), we have no information about it; it seems to be incompatible with ioMemory.

Regards,
Ajay Reddy
WDC Enterprise Support
 
@Jan Panoch
Hm, the standardised reply seems to have shifted from "sorry, no, it won't happen" to "we're looking into it" ;)
A bit early to celebrate, but we may have made them rethink the driver aspect.

As you already wrote: if multiple people start asking for an ioMemory driver for Debian 10, maybe they'll change their minds..
 
Hey all,
I greatly appreciate your work in getting these cards working with Proxmox - I've been getting tired of being forced to use ESXi and Hyper-V. I have a 1.25TB ioDrive2 that I'm trying to get working in an HP Z420 with a fresh install of Proxmox 5.4. I've tried a few times, but every time I try to load the module, I get:

Code:
modprobe: ERROR: could not insert 'iomemory_vsl': Invalid argument

If I check dmesg, I see a lot of this:

Code:
[  493.314017] iomemory_vsl: disagrees about version of symbol pci_enable_device
[  493.314020] iomemory_vsl: Unknown symbol pci_enable_device (err -22)
[  493.314041] iomemory_vsl: disagrees about version of symbol pci_dev_put
[  493.314042] iomemory_vsl: Unknown symbol pci_dev_put (err -22)
[  493.314051] iomemory_vsl: disagrees about version of symbol pci_get_device
[  493.314052] iomemory_vsl: Unknown symbol pci_get_device (err -22)
[  493.314058] iomemory_vsl: disagrees about version of symbol __pci_register_driver
[  493.314059] iomemory_vsl: Unknown symbol __pci_register_driver (err -22)
[  493.314065] iomemory_vsl: disagrees about version of symbol pci_disable_msi
[  493.314066] iomemory_vsl: Unknown symbol pci_disable_msi (err -22)
[  493.314071] iomemory_vsl: disagrees about version of symbol pci_request_regions
[  493.314072] iomemory_vsl: Unknown symbol pci_request_regions (err -22)
[  493.314100] iomemory_vsl: disagrees about version of symbol pci_unregister_driver
[  493.314101] iomemory_vsl: Unknown symbol pci_unregister_driver (err -22)
[  493.314111] iomemory_vsl: disagrees about version of symbol pci_read_config_dword
[  493.314112] iomemory_vsl: Unknown symbol pci_read_config_dword (err -22)
[  493.314134] iomemory_vsl: disagrees about version of symbol pci_enable_msix_range
[  493.314135] iomemory_vsl: Unknown symbol pci_enable_msix_range (err -22)
[  493.314151] iomemory_vsl: disagrees about version of symbol pci_find_capability
[  493.314152] iomemory_vsl: Unknown symbol pci_find_capability (err -22)
[  493.314158] iomemory_vsl: disagrees about version of symbol pci_enable_msi
[  493.314159] iomemory_vsl: Unknown symbol pci_enable_msi (err -22)
[  493.314176] iomemory_vsl: disagrees about version of symbol pci_read_config_word
[  493.314177] iomemory_vsl: Unknown symbol pci_read_config_word (err -22)
[  493.314254] iomemory_vsl: disagrees about version of symbol pci_set_master
[  493.314258] iomemory_vsl: Unknown symbol pci_set_master (err -22)
[  493.314288] iomemory_vsl: disagrees about version of symbol pci_release_regions
[  493.314289] iomemory_vsl: Unknown symbol pci_release_regions (err -22)
[  493.314291] iomemory_vsl: disagrees about version of symbol pci_write_config_byte
[  493.314291] iomemory_vsl: Unknown symbol pci_write_config_byte (err -22)
[  493.314300] iomemory_vsl: disagrees about version of symbol pci_disable_msix
[  493.314301] iomemory_vsl: Unknown symbol pci_disable_msix (err -22)
[  493.314302] iomemory_vsl: disagrees about version of symbol pci_disable_device
[  493.314303] iomemory_vsl: Unknown symbol pci_disable_device (err -22)
[  493.314311] iomemory_vsl: disagrees about version of symbol pci_read_config_byte
[  493.314313] iomemory_vsl: Unknown symbol pci_read_config_byte (err -22)
[  493.314320] iomemory_vsl: disagrees about version of symbol pci_write_config_word
[  493.314321] iomemory_vsl: Unknown symbol pci_write_config_word (err -22)
[  493.314329] iomemory_vsl: disagrees about version of symbol pci_write_config_dword
[  493.314330] iomemory_vsl: Unknown symbol pci_write_config_dword (err -22)
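These "disagrees about version of symbol" errors mean the module was built against headers for a different kernel than the one currently running. A quick way to confirm the mismatch (a diagnostic sketch, assuming the module was built via DKMS as described later in this thread) is to compare the running kernel against the module's vermagic:

Code:
# show the running kernel release
uname -r
# show which kernel the DKMS-built module was compiled for
modinfo /lib/modules/$(uname -r)/updates/dkms/iomemory-vsl.ko | grep vermagic
# rebuilding and reinstalling against the current kernel usually clears this:
dkms build -m iomemory-vsl -v 3.2.16 && dkms install -m iomemory-vsl -v 3.2.16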

Here's the output of fio-status -a:

Code:
Found 1 ioMemory device in this system
Driver version: Driver not loaded

Adapter: Single Controller Adapter
        Fusion-io ioDrive2 1.205TB, Product Number:F00-001-1T20-CS-0001, SN:1211D2433, FIO SN:1211D2433
        ioDrive2 Adapter Controller, PN:PA004137009
        External Power: NOT connected
        PCIe Power limit threshold: Disabled
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Connected ioMemory modules:
          06:00.0:      Product Number:F00-001-1T20-CS-0001, SN:1211D2433

06:00.0 ioDrive2 Adapter Controller, Product Number:F00-001-1T20-CS-0001, SN:1211D2433
        ioDrive2 Adapter Controller, PN:PA004137009
        SMP(AVR) Versions: App Version: 1.0.35.0, Boot Version: 0.0.9.1
        PCI:06:00.0
        Vendor:1aed, Device:2001, Sub vendor:1aed, Sub device:2001
        Firmware v7.1.17, rev 116786 Public
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 58.57 degC, max 59.55 degC
        Internal voltage: avg 1.02V, max 1.02V
        Aux voltage: avg 2.49V, max 2.49V

Any suggestions would be greatly appreciated! I'm going to try moving the card to other slots, and I do have some other Fusion-io cards in other systems I can pull if necessary, though I know this one is working.
 
Sure, I can open it up for a little bit if you want to take a look. I don't have anything on it. I'll send you a PM shortly.
 
@john8520

Fixed. Just a quick update for the others, in case you encounter this issue:
  1. make sure your repositories are set correctly. The default Proxmox repositories can prevent a proper update. Add the no-subscription repository and remove the enterprise one if you don't have a subscription;
  2. for some reason the drivers were not loading for kernel pve-4.15.18-12-pve. With the latest kernel everything works great;
  3. there is a command to compile the drivers for all installed kernel versions, in case you have several (a good way to make sure your drivers are ready):
    Code:
    ls /var/lib/initramfs-tools | sudo xargs -n1 /usr/lib/dkms/dkms_autoinstaller start
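You can then check what DKMS actually built for each installed kernel with dkms status (a quick verification sketch; the output line is illustrative only):

Code:
dkms status
# expected to list something like:
# iomemory-vsl, 3.2.16, 4.15.18-20-pve, x86_64: installed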
 
@john8520
for some reason the drivers were not loading for kernel pve-4.15.18-12-pve. With the latest kernel everything works great;
I ran into this exact situation after completing a fresh install of PVE 5.4 (that I just downloaded) while testing another Fusion-IO Scale2 card I picked up. After installing PVE 5.4, I had kernel pve-4.15.18-12 and received:
Code:
Backing up initrd.img-4.15.18-12-pve to /boot/initrd.img-4.15.18-12-pve.old-dkms
Making new initrd.img-4.15.18-12-pve
(If next boot fails, revert to initrd.img-4.15.18-12-pve.old-dkms image)
update-initramfs.............

DKMS: install completed.
modprobe: ERROR: could not insert 'iomemory_vsl': Invalid argument

After issuing 'apt-get dist-upgrade' and rebooting, my kernel version is now pve-4.15.18-20, and the installation completes without errors. Thank you for posting this!
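For anyone repeating this fix, the sequence is roughly the following (a sketch; it assumes the no-subscription repository is configured as described above):

Code:
apt-get update
apt-get dist-upgrade   # pulls in the newer pve-kernel
reboot
# after reboot, rebuild the module for the new kernel if DKMS didn't already:
dkms autoinstall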
 
Hi Vladimir @Vladimir Bulgaru

Just wanted to say thank you. After a few errors with dkms, I got it to work with 'apt-get dist-upgrade'!

Has anybody run into any issues with updates yet?

Regards Shaun
 
Hello,
I have an HP 320GB SLC PCIe ioDrive Duo for ProLiant Servers.
I am using kernel 5.0.21-5-pve and have made a successful compilation of the module.

Whether it runs stably, I don't know yet.

Here's what I did:

root@nasa:~# git clone https://github.com/snuf/iomemory-vsl
Cloning into 'iomemory-vsl'...
remote: Enumerating objects: 251, done.
remote: Counting objects: 100% (251/251), done.
remote: Compressing objects: 100% (121/121), done.
remote: Total 1375 (delta 129), reused 184 (delta 88), pack-reused 1124
Receiving objects: 100% (1375/1375), 10.47 MiB | 10.89 MiB/s, done.
Resolving deltas: 100% (713/713), done.

root@nasa:~# cd iomemory-vsl
root@nasa:~/iomemory-vsl#

root@nasa:~/iomemory-vsl# git checkout 5.1.28
Branch '5.1.28' set up to track remote branch '5.1.28' from 'origin'.
Switched to a new branch '5.1.28'

root@nasa:~/iomemory-vsl# cp -r root/usr/src/iomemory-vsl-3.2.16 /usr/src/
root@nasa:~/iomemory-vsl# mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build
root@nasa:~/iomemory-vsl# ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source

root@nasa:~/iomemory-vsl# dkms build -m iomemory-vsl -v 3.2.16

Kernel preparation unnecessary for this kernel. Skipping...

Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=5.0.21-5-pve..........
cleaning build area...

DKMS: build completed.


root@nasa:~/iomemory-vsl# dkms install -m iomemory-vsl -v 3.2.16

iomemory-vsl.ko:
Running module version sanity check.
- Original module
- Installation
- Installing to /lib/modules/5.0.21-5-pve/updates/dkms/

depmod...

Backing up initrd.img-5.0.21-5-pve to /boot/initrd.img-5.0.21-5-pve.old-dkms
Making new initrd.img-5.0.21-5-pve
(If next boot fails, revert to initrd.img-5.0.21-5-pve.old-dkms image)
update-initramfs.....

DKMS: install completed

root@nasa:~/iomemory-vsl# modprobe iomemory-vsl
root@nasa:~/iomemory-vsl# fio-status

Found 2 ioMemory devices in this system with 1 ioDrive Duo
Driver version: 3.2.16 build 1731

Adapter: Dual Adapter
        HP 320GB SLC PCIe ioDrive Duo for ProLiant Servers, Product Number:600281-B21, SN:33868
        External Power: NOT connected
        PCIe Power limit threshold: 24.75W
        Connected ioMemory modules:
          fct0: Product Number:600281-B21, SN:70132
          fct1: Product Number:600281-B21, SN:70184

fct0    Attached
        HP ioDIMM 160GB, Product Number:600281-B21, SN:70132
        Located in slot 0 Upper of ioDrive Duo HL SN:33868
        Last Power Monitor Incident: 41 sec
        PCI:07:00.0, Slot Number:5
        Firmware v7.1.17, rev 116786 Public
        160.00 GBytes device size
        Internal temperature: 47.74 degC, max 48.23 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Contained VSUs:
          fioa: ID:0, UUID:af073a36-822b-ab4b-9a02-8b5b5af259fd

fioa    State: Online, Type: block device
        ID:0, UUID:af073a36-822b-ab4b-9a02-8b5b5af259fd
        160.00 GBytes device size

fct1    Attached
        HP ioDIMM 160GB, Product Number:600281-B21, SN:70184
        Located in slot 1 Lower of ioDrive Duo HL SN:33868
        PCI:08:00.0, Slot Number:5
        Firmware v7.1.17, rev 116786 Public
        160.00 GBytes device size
        Internal temperature: 50.69 degC, max 51.19 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Contained VSUs:
          fiob: ID:0, UUID:cdfe5086-fa52-8143-ae42-12a8834983f0

fiob    State: Online, Type: block device
        ID:0, UUID:cdfe5086-fa52-8143-ae42-12a8834983f0
        160.00 GBytes device size

root@nasa:~/iomemory-vsl#
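Note that modprobe only loads the module for the current boot. To have it load automatically on every boot, a common approach (a sketch, not part of the transcript above) is:

Code:
# load iomemory-vsl at every boot
echo "iomemory-vsl" >> /etc/modules
# rebuild the initramfs so early boot picks the module up as well
update-initramfs -u -k $(uname -r)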
 
Hello,

I found that 4.15.18-23-pve works, but the newer 4.15.18-24-pve doesn't:

Code:
root@pve53:~# dkms build -m iomemory-vsl -v 3.2.16

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=4.15.18-24-pve................(bad exit status: 2)
Error! Bad return status for module build on kernel: 4.15.18-24-pve (x86_64)
Consult /var/lib/dkms/iomemory-vsl/3.2.16/build/make.log for more information.
root@pve53:~# cat /var/lib/dkms/iomemory-vsl/3.2.16/build/make.log
DKMS make.log for iomemory-vsl-3.2.16 for kernel 4.15.18-24-pve (x86_64)
Thu Dec 12 00:59:35 CET 2019
./kfio_config.sh -a x86_64 -o include/fio/port/linux/kfio_config.h -k /lib/modules/4.15.18-24-pve/build -p -d /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_config -l 0
Detecting Kernel Flags
Config dir         : /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_config
Output file        : include/fio/port/linux/kfio_config.h
Kernel output dir  : /lib/modules/4.15.18-24-pve/build
Kernel source dir  :
Starting tests:
  1576108775.652  KFIOC_MISSING_WORK_FUNC_T...
  1576108775.653  KFIOC_WORKDATA_PASSED_IN_WORKSTRUCT...
  1576108775.654  KFIOC_HAS_PCI_ERROR_HANDLERS...
...
  1576108814.187  KFIOC_ELEVATOR_EXIT_HAS_REQQ_PARAM=1
  1576108814.208  KFIOC_HAS_BLK_RQ_IS_PASSTHROUGH=1
  1576108814.230  KFIOC_HAS_BLK_QUEUE_BOUNCE=0
  1576108814.251  KFIOC_HAS_BLK_QUEUE_SPLIT2=1
Finished
1576108814.260  Exiting
Preserving configdir due to '-p' option: /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_config
make \
    -j4 \
-C /lib/modules/4.15.18-24-pve/build \
FIO_DRIVER_NAME=iomemory-vsl \
FIO_SCSI_DEVICE=0 \
FUSION_DRIVER_DIR=/var/lib/dkms/iomemory-vsl/3.2.16/build \
SUBDIRS=/var/lib/dkms/iomemory-vsl/3.2.16/build \
EXTRA_CFLAGS+="-I/var/lib/dkms/iomemory-vsl/3.2.16/build/include -DBUILDING_MODULE -DLINUX_IO_SCHED" \
INSTALL_MOD_DIR=extra/fio \
INSTALL_MOD_PATH= \
KFIO_LIB=kfio/x86_64_cc63_libkfio.o_shipped \
modules
make[1]: Entering directory '/usr/src/linux-headers-4.15.18-24-pve'
printf '#include "linux/module.h"\nMODULE_LICENSE("Proprietary");\n' >/var/lib/dkms/iomemory-vsl/3.2.16/build/license.c
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/main.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/pci.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/driver_init.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/sysrq.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/errno.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/state.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcache.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kfile.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kmem.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kmisc.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kscatter.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/ktime.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/sched.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/cdev.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcondvar.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kinfo.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kexports.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/khotplug.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcsr.o
  SHIPPED /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio/x86_64_cc63_libkfio.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/module_param.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/license.o
  LD [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/iomemory-vsl.o
  Building modules, stage 2.
  MODPOST 1 modules
WARNING: could not find /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio/.x86_64_cc63_libkfio.o.cmd for /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio/x86_64_cc63_libkfio.o
FATAL: modpost: GPL-incompatible module iomemory-vsl.ko uses GPL-only symbol 'ktime_get_real_seconds'
scripts/Makefile.modpost:92: recipe for target '__modpost' failed
make[2]: *** [__modpost] Error 1
Makefile:1583: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-4.15.18-24-pve'
Makefile:82: recipe for target 'modules' failed
make: *** [modules] Error 2
root@pve53:~#

Bye

Jan
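For context, the modpost failure above is a licensing check rather than a compile error: ktime_get_real_seconds is exported as GPL-only, and a module declaring MODULE_LICENSE("Proprietary") may not link against it. You can confirm the export type on the target kernel (a diagnostic sketch, assuming the matching kernel headers are installed):

Code:
# Module.symvers lists each exported symbol with its export type;
# EXPORT_SYMBOL_GPL marks symbols unavailable to proprietary modules
grep ktime_get_real_seconds /lib/modules/4.15.18-24-pve/build/Module.symvers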
 
Hello!

Indeed, this is correct, and it seems to be a kernel bug, since there is no reason for the drivers to be compatible with all previous kernels yet fail on this one. cc @fabian
 

There is nothing we can do here. The kernel ABI changes over time, even within stable releases, and out-of-tree modules need to stay in sync, or they may no longer compile.
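Until the out-of-tree module catches up, one pragmatic stopgap (a sketch; note that a held kernel stops receiving updates, so use with care) is to stay on the last working kernel and prevent newer ABIs from being pulled in:

Code:
# stop apt from pulling in new kernel ABIs until the driver supports them
apt-mark hold pve-kernel-4.15
# release the hold once the driver builds against newer kernels
apt-mark unhold pve-kernel-4.15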
 
Currently working on a solution to port ioDrive2 devices to Proxmox 6.1. The last issue to solve is the volume group activation after boot, mentioned here. Once that issue is solved, the solution will be ready for testing in production environments. I guess we all want a piece of that Proxmox 6 action :P
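For reference, the volume group problem arises because the ioMemory device only appears once the driver has loaded, which can be after LVM's normal boot-time activation. One possible workaround (a hypothetical sketch, not the solution referenced above; the unit name is made up) is a small unit that re-activates volume groups late in boot:

Code:
# /etc/systemd/system/fio-vgchange.service (hypothetical)
[Unit]
Description=Re-activate LVM volume groups on Fusion-io devices
After=systemd-modules-load.service
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/sbin/vgchange -ay

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable fio-vgchange.service.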
 
Yeah, they don't work in Proxmox 6 :(

Code:
root@pve1:~# fio-status -a

Found 1 ioMemory device in this system
Driver version: Driver not loaded

Adapter: ioMono
        Fusion-io 1.65TB ioScale2, Product Number:F11-003-1T65-CS-0001, SN:1308G0848, FIO SN:1308G0848
        ioDrive2 Adapter Controller, PN:PA005004003
        External Power: NOT connected
        PCIe Power limit threshold: Disabled
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Connected ioMemory modules:
          41:00.0:      Product Number:F11-003-1T65-CS-0001, SN:1308G0848

41:00.0 ioDrive2 Adapter Controller, Product Number:F11-003-1T65-CS-0001, SN:1308G0848
        ioDrive2 Adapter Controller, PN:PA005004003
        SMP(AVR) Versions: App Version: 1.0.15.0, Boot Version: 1.0.4.1
        PCI:41:00.0
        Vendor:1aed, Device:2001, Sub vendor:1aed, Sub device:2001
        Firmware v7.1.15, rev 110356 Public
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 49.22 degC, max 49.71 degC
        Internal voltage: avg 1.01V, max 1.01V
        Aux voltage: avg 2.49V, max 2.49V
 
Hi @Vladimir Bulgaru, I'm also desperately looking for a backport of the ioDrive2 drivers to Proxmox 6.1. Since I don't use LVM on them, would you mind letting me test your solution? Thanks and Merry Christmas!! :)
 
Hey!

Simply install these drivers: https://github.com/snuf/iomemory-vsl/tree/5.1.28.
Make sure you use the drivers from that specific branch (5.1.28).
The only thing you will have to adjust is the source path (the commands below assume you've downloaded the drivers to the /home/temp directory):
Code:
cp -r /home/temp/iomemory-vsl-5.1.28/root/usr/src/iomemory-vsl-3.2.16 /usr/src/ && \
mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build && \
ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source && \
dkms build -m iomemory-vsl -v 3.2.16 && \
dkms install -m iomemory-vsl -v 3.2.16 && \
modprobe iomemory-vsl
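If the install goes through, a quick sanity check (a sketch) is:

Code:
lsmod | grep iomemory   # the module should be listed
fio-status -a           # "Driver version" should no longer read "not loaded"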
 
Thanks, but unfortunately, while I was able to compile the 5.1.28 branch of snuf's iomemory-vsl fork successfully, it won't recognize my SX350-3200 ioDrive2; I think that version of the driver is for gen. 1 ioDrives only. For now I've reverted to kernel 4.15.18-24-pve, where I can get the stock iomemory_vsl4 4.3.6 to work without problems, while I wait for WD to port their driver to 5.x kernels.
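For anyone needing the same fallback, keeping an older kernel bootable is straightforward (a sketch; the entry is chosen interactively at boot under GRUB's "Advanced options" submenu):

Code:
# make sure the known-good kernel stays installed alongside newer ones
apt install pve-kernel-4.15.18-24-pve
# at boot, pick it from the GRUB "Advanced options" submenu;
# list the available entries with:
grep menuentry /boot/grub/grub.cfg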
 
