[TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

Hello Vladimir,
thank you!
I have followed your how-to and unfortunately I still get errors. Could you help?
I have two old ioDrive2 3000GB cards with firmware version 7.1.17 116786.

I performed all the steps and then ran fio-status -a. It returned:

Code:
Driver version: 3.2.16 build 1731


Adapter: Single Controller Adapter

        Fusion-io ioDrive2 3000GB, Product Number:778DW, SN:US0778DW760513C60021

        ioDrive2 Adapter Controller, PN:8Y0YT

        External Power: NOT connected

        PCIe Power limit threshold: 24.75W

        PCIe slot available power: 75.00W

        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total

        Connected ioMemory modules:

          fct0: Product Number:778DW, SN:1339D058E


fct0    Status unknown: Driver is in MINIMAL MODE:

                Device has a hardware failure

        ioDrive2 Adapter Controller, Product Number:778DW, SN:1339D058E

!! ---> There are active errors or warnings on this device!  Read below for details.

        ioDrive2 Adapter Controller, PN:8Y0YT

        SMP(AVR) Versions: App Version: 1.0.21.0, Boot Version: 1.0.6.1

        Located in slot 0 Center of ioDrive2 Adapter Controller SN:1339D058E

        Powerloss protection: not available

        PCI:41:00.0

        Vendor:1aed, Device:2001, Sub vendor:1028, Sub device:1f7d

        Firmware v7.1.17, rev 116786 Public

        Geometry and capacity information not available.

        Format: not low-level formatted

        PCIe slot available power: 75.00W

        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total

        Internal temperature: 43.31 degC, max 43.31 degC

        Internal voltage: avg 1.01V, max 1.02V

        Aux voltage: avg 2.48V, max 2.48V

        Rated PBW: 37.00 PB

        Lifetime data volumes:

           Physical bytes written: 0

           Physical bytes read   : 0

        RAM usage:

           Current: 0 bytes

           Peak   : 0 bytes


        ACTIVE WARNINGS:

            The ioMemory is currently running in a minimal state.


I looked in /var/log/kernel and found this:


Code:
pve kernel: [   15.869275] <6>fioinf ioDrive 0000:01:00.0: mapping controller on BAR 5
pve kernel: [   15.870453] <6>fioinf ioDrive 0000:01:00.0: MSI enabled
pve kernel: [   15.871292] <6>fioinf ioDrive 0000:01:00.0: using MSI interrupts
pve kernel: [   15.901960] <6>fioinf ioDrive 0000:01:00.0.0: Starting master controller
pve kernel: [   15.983945] <6>fioinf ioDrive 0000:01:00.0.0: PMP Address: 1 1 1
pve kernel: [   16.207996] <6>fioinf ioDrive 0000:01:00.0.0: SMP Controller Firmware APP  version 1.0.21 0
pve kernel: [   16.209326] <6>fioinf ioDrive 0000:01:00.0.0: SMP Controller Firmware BOOT version 1.0.6 1
pve kernel: [   16.796194] <6>fioinf ioDrive 0000:01:00.0.0: Required PCIE bandwidth 2.000 GBytes per sec
pve kernel: [   16.797470] <6>fioinf ioDrive 0000:01:00.0.0: Board serial number is 1339D0ECB
pve kernel: [   16.798249] <6>fioinf ioDrive 0000:01:00.0.0: Adapter serial number is 1339D0ECB
pve kernel: [   16.799018] <6>fioinf ioDrive 0000:01:00.0.0: Default capacity        3000.000 GBytes
pve kernel: [   16.799760] <6>fioinf ioDrive 0000:01:00.0.0: Default sector size     512 bytes
pve kernel: [   16.800492] <6>fioinf ioDrive 0000:01:00.0.0: Rated endurance         37.00 PBytes
pve kernel: [   16.801210] <6>fioinf ioDrive 0000:01:00.0.0: 100C temp range hardware found
pve kernel: [   16.801924] <6>fioinf ioDrive 0000:01:00.0.0: Maximum capacity        3200.000 GBytes
pve kernel: [   17.672205] <6>fioinf ioDrive 0000:01:00.0.0: Firmware version 7.1.17 116786 (0x700411 0x1c832)
pve kernel: [   17.673614] <6>fioinf ioDrive 0000:01:00.0.0: Platform version 20
pve kernel: [   17.674369] <6>fioinf ioDrive 0000:01:00.0.0: Firmware VCS version 116786 [0x1c832]
pve kernel: [   17.675113] <6>fioinf ioDrive 0000:01:00.0.0: Firmware VCS uid 0xaeb15671994a45642f91efbb214fa428e4245f8a
pve kernel: [   17.678295] <6>fioinf ioDrive 0000:01:00.0.0: Powercut flush: Enabled
pve kernel: [   17.892009] <6>fioinf ioDrive 0000:01:00.0.0: PCIe power monitor enabled (master). Limit set to 24.750 watts.
pve kernel: [   17.892966] <6>fioinf ioDrive 0000:01:00.0.0: Thermal monitoring: Enabled
pve kernel: [   17.893686] <6>fioinf ioDrive 0000:01:00.0.0: Hardware temperature alarm set for 100C.
pve kernel: [   18.067966] <6>fioinf ioDrive 0000:01:00.0: Found device fct1 (Fusion-io ioDrive2 3000GB 0000:01:00.0) on pipeline 0
pve kernel: [   18.085807] <3>fioerr Fusion-io ioDrive2 3000GB 0000:01:00.0: failed to map append request
pve kernel: [   18.086715] <3>fioerr Fusion-io ioDrive2 3000GB 0000:01:00.0: request page program 000000001f78a0b3 failed -22
pve kernel: [   18.524044] <6>fioinf ioDrive 0000:01:00.0.0: stuck flush request on startup detected, retry iteration 1 of 3...
pve kernel: [   18.525352] <6>fioinf ioDrive 0000:01:00.0.0: Starting master controller
pve kernel: [   18.607946] <6>fioinf ioDrive 0000:01:00.0.0: PMP Address: 1 1 1
pve kernel: [   18.748108] <6>fioinf ioDrive 0000:01:00.0.0: SMP Controller Firmware APP  version 1.0.21 0
pve kernel: [   18.749380] <6>fioinf ioDrive 0000:01:00.0.0: SMP Controller Firmware BOOT version 1.0.6 1
pve kernel: [   19.392198] <6>fioinf ioDrive 0000:01:00.0.0: Required PCIE bandwidth 2.000 GBytes per sec
pve kernel: [   19.393437] <6>fioinf ioDrive 0000:01:00.0.0: Board serial number is 1339D0ECB
pve kernel: [   19.394210] <6>fioinf ioDrive 0000:01:00.0.0: Adapter serial number is 1339D0ECB
pve kernel: [   19.394965] <6>fioinf ioDrive 0000:01:00.0.0: Default capacity        3000.000 GBytes
pve kernel: [   19.395699] <6>fioinf ioDrive 0000:01:00.0.0: Default sector size     512 bytes
pve kernel: [   19.396419] <6>fioinf ioDrive 0000:01:00.0.0: Rated endurance         37.00 PBytes
pve kernel: [   19.397128] <6>fioinf ioDrive 0000:01:00.0.0: 100C temp range hardware found
pve kernel: [   19.397836] <6>fioinf ioDrive 0000:01:00.0.0: Maximum capacity        3200.000 GBytes
pve kernel: [   19.984204] <6>fioinf ioDrive 0000:01:00.0.0: Firmware version 7.1.17 116786 (0x700411 0x1c832)
pve kernel: [   19.985479] <6>fioinf ioDrive 0000:01:00.0.0: Platform version 20
pve kernel: [   19.986206] <6>fioinf ioDrive 0000:01:00.0.0: Firmware VCS version 116786 [0x1c832]
pve kernel: [   19.986923] <6>fioinf ioDrive 0000:01:00.0.0: Firmware VCS uid 0xaeb15671994a45642f91efbb214fa428e4245f8a
pve kernel: [   19.990110] <6>fioinf ioDrive 0000:01:00.0.0: Powercut flush: Enabled
pve kernel: [   20.161018] <3>fioerr ioDrive 0000:01:00.0.0: could not find canonical value across 30 pads
pve kernel: [   20.459439] <3>fioerr ioDrive 0000:01:00.0.0: MINIMAL MODE DRIVER: hardware failure.
pve kernel: [   20.572041] <6>fioinf ioDrive 0000:01:00.0: Found device fct1 (Fusion-io ioDrive2 3000GB 0000:01:00.0) on pipeline 0
pve kernel: [   20.590207] <6>fioinf fct1: stuck flush request got better on retry.
pve kernel: [   20.590216] <6>fioinf Fusion-io ioDrive2 3000GB 0000:01:00.0: Attaching explicitly disabled
pve kernel: [   20.591113] <6>fioinf Fusion-io ioDrive2 3000GB 0000:01:00.0: probed fct1
pve kernel: [   20.592008] <3>fioerr Fusion-io ioDrive2 3000GB 0000:01:00.0: auto attach failed with error EINVAL: Invalid argument


Could you please help? I have no idea what else I can do.
Thank you in advance
 
Hi everyone! I'm a new Proxmox user (just got it installed on my IBM X5 x3850 a few days ago), and I'm trying to get an 80GB FusionIO drive that I bought a few months ago up and working on Proxmox 7.1-2. I have followed the Proxmox 7 guide in Vladimir's post (including running
Code:
ls /lib/modules | sudo xargs -n1 /usr/lib/dkms/dkms_autoinstaller start
and rebooting after that step), and I did not get any errors while doing so, but calling "fio-status -a" simply returns:

Code:
root@IBMx3850:/# fio-status -a

Found 1 ioMemory device in this system
Driver version: Driver not loaded

Adapter: ioDimm
        Fusion-io ioDimm3 80GB, Product Number:FS1-001-081-ES, SN:4776, FIO SN:4776
        ioDimm3, PN:001194011
        External Power: connected
        PCIe Power limit threshold: Disabled
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 2.5 Gt/sec each, 1000.00 MBytes/sec total
        Connected ioMemory modules:
          95:00.0:      Product Number:FS1-001-081-ES, SN:4776

95:00.0 ioDimm3, Product Number:FS1-001-081-ES, SN:4776
        ioDimm3, PN:001194011
        PCI:95:00.0
        Vendor:1aed, Device:1005, Sub vendor:1aed, Sub device:1010
        Firmware v4.0.1, rev 41356 Public
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 2.5 Gt/sec each, 1000.00 MBytes/sec total
        Internal temperature: 50.69 degC, max 51.68 degC
        Internal voltage: avg 0.97V, max 2.38V
        Aux voltage: avg 2.48V, max 2.38V

root@IBMx3850:/#

I tried going into /etc/lvm/lvm.conf and changing the global { activation } variable from 1 to 0 and rebooting, with no effect.
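For reference, that change amounts to roughly this fragment of lvm.conf (only the relevant lines shown; the rest of the file was left untouched):

Code:
# /etc/lvm/lvm.conf (relevant fragment only)
global {
    ...
    activation = 0    # was 1
}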

I tried going into /etc/modprobe.d/iomemory-vsl.conf and adding the following, with no effect:
Code:
options iomemory-vsl global_slot_power_limit_mw=50000
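For what it's worth, something along these lines should at least show whether the module is even being built and loaded for the running kernel (standard dkms and modprobe commands):

Code:
dkms status                          # is iomemory-vsl listed as installed for this kernel?
lsmod | grep -i iomemory             # is the module currently loaded?
modprobe iomemory-vsl; dmesg | tail  # try loading it by hand and check the kernel messages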

I don't know what I'm doing wrong or what else to try, does anyone have any ideas/suggestions?
 
Heads up, fellow FusionIO users. I installed the latest update that included the new kernel version 5.15.30-2, and it completely borked the DKMS build for the driver. I spent the last 30 minutes or so validating that my test machine successfully rolled back (thank you HPE, your servers boot slow as a slug!).

The RemixVSL repo says "Generally main should be checked out. main is completely backwards compatible for all 5. The latest working tested kernel is 5.16.14." but this goes a bit above my skill level <shrug>

Code:
DKMS make.log for iomemory-vsl-3.2.16 for kernel 5.15.30-2-pve (x86_64)
Wed 04 May 2022 08:41:52 PM EDT
sed -i 's/Proprietary/GPL/g' Kbuild

Change found in target kernel: KERNELVER KERNEL_SRC
Running clean before building driver

make[1]: Entering directory '/var/lib/dkms/iomemory-vsl/3.2.16/build'
make \
        -j40 \
    -C /lib/modules/5.15.30-2-pve/build \
    FIO_DRIVER_NAME=iomemory-vsl \
    FUSION_DRIVER_DIR=/var/lib/dkms/iomemory-vsl/3.2.16/build \
    M=/var/lib/dkms/iomemory-vsl/3.2.16/build \
    EXTRA_CFLAGS+="-I/var/lib/dkms/iomemory-vsl/3.2.16/build/include -DBUILDING_MODULE -DLINUX_IO_SCHED -Wall -Werror" \
    KFIO_LIB=kfio/x86_64_cc102_libkfio.o_shipped \
    clean
make[2]: Entering directory '/usr/src/linux-headers-5.15.30-2-pve'
make[2]: Leaving directory '/usr/src/linux-headers-5.15.30-2-pve'
rm -rf include/fio/port/linux/kfio_config.h kfio_config license.c
make[1]: Leaving directory '/var/lib/dkms/iomemory-vsl/3.2.16/build'
if [ "102" -gt "74" ];then \
    if [ ! -f "kfio/x86_64_cc102_libkfio.o_shipped" ];then \
        cp kfio/x86_64_cc74_libkfio.o_shipped kfio/x86_64_cc102_libkfio.o_shipped; \
    fi \
fi
./kfio_config.sh -a x86_64 -o include/fio/port/linux/kfio_config.h -k /lib/modules/5.15.30-2-pve/build -p -d /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_config -l 0 -s /lib/modules/5.15.30-2-pve/source
Detecting Kernel Flags
Config dir         : /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_config
Output file        : include/fio/port/linux/kfio_config.h
Kernel output dir  : /lib/modules/5.15.30-2-pve/build
Kernel source dir  : /lib/modules/5.15.30-2-pve/build
Starting tests:
  1651711312.643  KFIOC_X_PROC_CREATE_DATA_WANTS_PROC_OPS...
  1651711312.686  KFIOC_X_TASK_HAS_CPUS_MASK...
  1651711312.729  KFIOC_X_LINUX_HAS_PART_STAT_H...
  1651711312.774  KFIOC_X_BLK_ALLOC_QUEUE_NODE_EXISTS...
  1651711312.817  KFIOC_X_HAS_MAKE_REQUEST_FN...
  1651711312.861  KFIOC_X_GENHD_PART0_IS_A_POINTER...
  1651711312.906  KFIOC_X_BIO_HAS_BI_BDEV...
Started tests, waiting for completions...
  1651711313.964  KFIOC_X_PROC_CREATE_DATA_WANTS_PROC_OPS=1
  1651711313.978  KFIOC_X_TASK_HAS_CPUS_MASK=1
  1651711313.992  KFIOC_X_LINUX_HAS_PART_STAT_H=1
  1651711314.006  KFIOC_X_BLK_ALLOC_QUEUE_NODE_EXISTS=0
  1651711314.019  KFIOC_X_HAS_MAKE_REQUEST_FN=0
  1651711314.033  KFIOC_X_GENHD_PART0_IS_A_POINTER=1
  1651711315.051  KFIOC_X_BIO_HAS_BI_BDEV=1
Finished
1651711315.057  Exiting
Preserving configdir due to '-p' option: /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_config
make: git: No such file or directory
/var/lib/dkms/iomemory-vsl/3.2.16/build/module_operations.sh: line 40: git: command not found
/var/lib/dkms/iomemory-vsl/3.2.16/build/module_operations.sh: line 42: git: command not found
make \
    -j40 \
-C /lib/modules/5.15.30-2-pve/build \
FIO_DRIVER_NAME=iomemory-vsl \
FUSION_DRIVER_DIR=/var/lib/dkms/iomemory-vsl/3.2.16/build \
M=/var/lib/dkms/iomemory-vsl/3.2.16/build \
EXTRA_CFLAGS+="-I/var/lib/dkms/iomemory-vsl/3.2.16/build/include -DBUILDING_MODULE -DLINUX_IO_SCHED -Wall -Werror" \
INSTALL_MOD_DIR=extra/fio \
INSTALL_MOD_PATH= \
KFIO_LIB=kfio/x86_64_cc102_libkfio.o_shipped \
modules
make[1]: Entering directory '/usr/src/linux-headers-5.15.30-2-pve'
printf '#include "linux/module.h"\nMODULE_LICENSE("GPL");\n' >/var/lib/dkms/iomemory-vsl/3.2.16/build/license.c
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/main.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/pci.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/sysrq.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/driver_init.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/errno.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/state.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcache.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kfile.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kmem.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio_common.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcpu.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kscatter.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/ktime.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/sched.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/cdev.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcondvar.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kinfo.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kexports.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/khotplug.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/kcsr.o
  SHIPPED /var/lib/dkms/iomemory-vsl/3.2.16/build/kfio/x86_64_cc102_libkfio.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/module_param.o
  CC [M]  /var/lib/dkms/iomemory-vsl/3.2.16/build/license.o
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c: In function ‘kfio_expose_disk’:
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:356:19: error: implicit declaration of function ‘alloc_disk’; did you mean ‘alloc_uid’? [-Werror=implicit-function-declaration]
  356 |     dp->gd = gd = alloc_disk(FIO_NUM_MINORS);
      |                   ^~~~~~~~~~
      |                   alloc_uid
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:356:17: error: assignment to ‘struct gendisk *’ from ‘int’ makes pointer from integer without a cast [-Werror=int-conversion]
  356 |     dp->gd = gd = alloc_disk(FIO_NUM_MINORS);
      |                 ^
In file included from /var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:50:
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c: In function ‘kfio_destroy_disk’:
/var/lib/dkms/iomemory-vsl/3.2.16/build/include/kblock_meta.h:38:20: error: implicit declaration of function ‘bdgrab’; did you mean ‘igrab’? [-Werror=implicit-function-declaration]
   38 |   #define GET_BDEV bdgrab(disk->gd->part0)
      |                    ^~~~~~
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:423:16: note: in expansion of macro ‘GET_BDEV’
  423 |         bdev = GET_BDEV;
      |                ^~~~~~~~
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:423:14: error: assignment to ‘struct block_device *’ from ‘int’ makes pointer from integer without a cast [-Werror=int-conversion]
  423 |         bdev = GET_BDEV;
      |              ^
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c: In function ‘kfio_bdput’:
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:500:5: error: implicit declaration of function ‘bdput’; did you mean ‘fdput’? [-Werror=implicit-function-declaration]
  500 |     bdput(bdev);
      |     ^~~~~
      |     fdput
In file included from /var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:50:
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c: In function ‘kfio_alloc_queue’:
/var/lib/dkms/iomemory-vsl/3.2.16/build/include/kblock_meta.h:33:27: error: implicit declaration of function ‘blk_alloc_queue’; did you mean ‘kfio_alloc_queue’? [-Werror=implicit-function-declaration]
   33 |   #define BLK_ALLOC_QUEUE blk_alloc_queue(node);
      |                           ^~~~~~~~~~~~~~~
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:960:10: note: in expansion of macro ‘BLK_ALLOC_QUEUE’
  960 |     rq = BLK_ALLOC_QUEUE;
      |          ^~~~~~~~~~~~~~~
/var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.c:960:8: error: assignment to ‘struct request_queue *’ from ‘int’ makes pointer from integer without a cast [-Werror=int-conversion]
  960 |     rq = BLK_ALLOC_QUEUE;
      |        ^
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:285: /var/lib/dkms/iomemory-vsl/3.2.16/build/kblock.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [Makefile:1875: /var/lib/dkms/iomemory-vsl/3.2.16/build] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.15.30-2-pve'
make: *** [Makefile:134: modules] Error 2
 
Oh, that's a bummer... I just got this card and wanted to try it on Proxmox :(
 
Same here. You just need to download and install the latest drivers. The only bad side effect is that I lost everything that was on the drive when the new drivers were installed. I had recent backups, so not a big deal.
 
cali0028 - I experienced the same issue with my fusion-io scale 2 this morning. I was able to get my server back up and running fairly quickly though. From the GNU GRUB boot menu, select "Advanced options for Proxmox VE GNU/Linux" then "Proxmox VE GNU/Linux, with Linux 5.13.19-6-pve"... After a reboot with kernel 5.13.19-6 - all is well. I have kernel 5.15.35-1 and 5.15.30-2 options that refuse to play well with my fusion-io board...

Before selecting a different kernel, I tried driver reinstallation and it failed:

Code:
root@t630:/home/temp# dkms build -m iomemory-vsl -v 3.2.16

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=5.15.30-2-pve........(bad exit status: 2)
Error! Bad return status for module build on kernel: 5.15.30-2-pve (x86_64)
Consult /var/lib/dkms/iomemory-vsl/3.2.16/build/make.log for more information.
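If you want the machine to keep booting the 5.13 kernel by default instead of picking it from the GRUB menu each time, one way (on a GRUB-booted system, using the exact menu titles quoted above; adjust them if yours differ) is to pin the default entry in /etc/default/grub and regenerate the config:

Code:
# /etc/default/grub  ("submenu title>entry title", exactly as shown in the GRUB menu)
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.13.19-6-pve"

# then regenerate the GRUB config
update-grub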
 
Are you guys grabbing the latest version (aka master) of the driver? Using the one listed in the first post and trying to compile it against the 5.15.30+ kernels will only fail to build.
 
kromberg - No, I am using the following files (as in the first post):
Code:
iomemory-vsl-5.12.1
fio-common_3.2.16.1731-1.0_amd64.deb
fio-firmware-fusion_3.2.16.20180821-1_all.deb
fio-sysvinit_3.2.16.1731-1.0_all.deb
fio-util_3.2.16.1731-1.0_amd64.deb
What are you using and could you please provide a link? Also, is it working on kernel 5.15.35-1?
 
https://github.com/RemixVSL/iomemory-vsl

Just did an install this morning. You just need to replace the step of downloading the iomemory-vsl zip file with a download from the above GitHub link, then rename the unzipped directory.
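Roughly, that step looks like this (a sketch only; the URL is the repo's main-branch zip, and the target directory name should match whatever the guide in the first post expects):

Code:
wget https://github.com/RemixVSL/iomemory-vsl/archive/refs/heads/main.zip -O iomemory-vsl.zip
unzip iomemory-vsl.zip
# rename the unzipped directory to the name the guide expects, e.g.:
mv iomemory-vsl-main iomemory-vsl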

Code:
root@odin:~# uname -a && fio-status -a
Linux odin 5.15.35-1-pve #1 SMP PVE 5.15.35-3 (Wed, 11 May 2022 07:57:51 +0200) x86_64 GNU/Linux

Found 1 ioMemory device in this system
Driver version: 3.2.16 build 1731

Adapter: Single Controller Adapter
        Fusion-io ioScale 3.20TB, Product Number:F11-002-3T20-CS-0001, SN:1334D0F51, FIO SN:1334D0F51
        ioDrive2 Adapter Controller, PN:PA005064001
        External Power: NOT connected
        PCIe Bus voltage: avg 11.70V
        PCIe Bus current: avg 0.96A
        PCIe Bus power: avg 13.03W
        PCIe Power limit threshold: 24.75W
        PCIe slot available power: unavailable
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Connected ioMemory modules:
          fct0: Product Number:F11-002-3T20-CS-0001, SN:1334D0F51

fct0    Attached
        ioDrive2 Adapter Controller, Product Number:F11-002-3T20-CS-0001, SN:1334D0F51
        ioDrive2 Adapter Controller, PN:PA005064001
        SMP(AVR) Versions: App Version: 1.0.21.0, Boot Version: 1.0.6.1
        Located in slot 0 Center of ioDrive2 Adapter Controller SN:1334D0F51
        Write governing: Active
        Powerloss protection: protected
        Last Power Monitor Incident: 2 sec
        PCI:03:00.0, Slot Number:2
        Vendor:1aed, Device:2001, Sub vendor:1aed, Sub device:2001
        Firmware v7.1.17, rev 116786 Public
        3200.00 GBytes device size
        Format: v500, 6250000000 sectors of 512 bytes
        PCIe slot available power: unavailable
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 54.63 degC, max 58.57 degC
        Internal voltage: avg 1.02V, max 1.02V
        Aux voltage: avg 2.48V, max 2.48V
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Active media: 100.00%
        Rated PBW: 20.00 PB, 92.99% remaining
        Lifetime data volumes:
           Physical bytes written: 1,401,931,439,325,560
           Physical bytes read   : 5,046,911,377,136,928
        RAM usage:
           Current: 995,319,552 bytes
           Peak   : 995,319,552 bytes
        Contained VSUs:
          fioa: ID:0, UUID:b9f187f8-2e1b-674f-a62e-33fc5b8f7197

fioa    State: Online, Type: block device
        ID:0, UUID:b9f187f8-2e1b-674f-a62e-33fc5b8f7197
        3200.00 GBytes device size
        Format: 6250000000 sectors of 512 bytes
 
I am hitting write governing because of the default power limit of 25 watts. Anyone know how to bump that up? I found this, but I can't find the fio-config app:

fio-config -p FIO_EXTERNAL_POWER_OVERRIDE <device serial number>:<power in watts>

I also found that it can be set via /etc/modprobe.d/iomemory-vsl.conf:

options iomemory-vsl global_slot_power_limit_mw=35000

But you also need to set FIO_EXTERNAL_POWER_OVERRIDE, and I can't find how to set it or to what value.
 
Got it figured out. The following will list all parameters for all currently loaded modules:
Code:
cat /proc/modules | cut -f 1 -d " " | while read module; do
  echo "Module: $module"
  if [ -d "/sys/module/$module/parameters" ]; then
    ls /sys/module/$module/parameters/ | while read parameter; do
      echo -n "Parameter: $parameter --> "
      cat /sys/module/$module/parameters/$parameter
    done
  fi
  echo
done > out


Used that info to create /etc/modprobe.d/iomemory-vsl.conf:
Code:
options iomemory-vsl external_power_override=1
options iomemory-vsl global_slot_power_limit_mw=35000

Update the initramfs: update-initramfs -u && reboot
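After the reboot, the same /sys interface the one-liner above reads from can be used to confirm the values actually took effect (module name iomemory_vsl, since dashes become underscores in /sys; assuming both parameters are exported there):

Code:
cat /sys/module/iomemory_vsl/parameters/external_power_override
cat /sys/module/iomemory_vsl/parameters/global_slot_power_limit_mw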

Code:
root@odin:~# fio-status -a

Found 1 ioMemory device in this system
Driver version: 3.2.16 build 1731

Adapter: Single Controller Adapter
        Fusion-io ioScale 3.20TB, Product Number:F11-002-3T20-CS-0001, SN:1334D0F51, FIO SN:1334D0F51
        ioDrive2 Adapter Controller, PN:PA005064001
        External Power: NOT connected
        PCIe Bus voltage: avg 11.70V
        PCIe Bus current: avg 0.97A
        PCIe Bus power: avg 11.31W
        PCIe Power limit threshold: 35.00W
        PCIe slot available power: unavailable
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Connected ioMemory modules:
          fct0: Product Number:F11-002-3T20-CS-0001, SN:1334D0F51

fct0    Attached
        ioDrive2 Adapter Controller, Product Number:F11-002-3T20-CS-0001, SN:1334D0F51
        ioDrive2 Adapter Controller, PN:PA005064001
        SMP(AVR) Versions: App Version: 1.0.21.0, Boot Version: 1.0.6.1
        Located in slot 0 Center of ioDrive2 Adapter Controller SN:1334D0F51
        Powerloss protection: protected
        PCI:03:00.0, Slot Number:2
        Vendor:1aed, Device:2001, Sub vendor:1aed, Sub device:2001
        Firmware v7.1.17, rev 116786 Public
        3200.00 GBytes device size
        Format: v500, 6250000000 sectors of 512 bytes
        PCIe slot available power: unavailable
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 56.60 degC, max 56.60 degC
        Internal voltage: avg 1.02V, max 1.02V
        Aux voltage: avg 2.48V, max 2.48V
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Active media: 100.00%
        Rated PBW: 20.00 PB, 92.99% remaining
        Lifetime data volumes:
           Physical bytes written: 1,402,699,219,658,944
           Physical bytes read   : 5,046,916,220,292,024
        RAM usage:
           Current: 1,273,993,792 bytes
           Peak   : 1,273,993,792 bytes
        Contained VSUs:
          fioa: ID:0, UUID:b9f187f8-2e1b-674f-a62e-33fc5b8f7197

fioa    State: Online, Type: block device
        ID:0, UUID:b9f187f8-2e1b-674f-a62e-33fc5b8f7197
        3200.00 GBytes device size
        Format: 6250000000 sectors of 512 bytes

With it set to 35 W (35000 mW), the power governor no longer kicks in.
 
Thanks for the info. I will give that a try
 
Would you mind posting the install code for the new folks?
 
Hi guys!
The problem with the newer 5.15 kernels is solved very simply, and it is documented on the GitHub page where the VSL sources are stored.
You MUST add a kernel boot option in /etc/default/grub, like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
The added boot flag is iommu=pt.
Please note that this parameter is for Intel processors; for AMD there is a different parameter, which you can find at the link above.
After a reboot, everything built and started fine.

I think Vladimir should add this information to the header of this post.

Code:
root@nxx:~# uname -a
Linux nxx 5.15.35-1-pve #1 SMP PVE 5.15.35-3 (Wed, 11 May 2022 07:57:51 +0200) x86_64 GNU/Linux

root@nxx:~# fio-status -e

Found 1 ioMemory device in this system
Driver version: 3.2.16 build 1731

Adapter: Single Controller Adapter
        Fusion-io ioScale 3.20TB, Product Number:#########, SN:#########, FIO SN:#########
        Connected ioMemory modules:
          fct0: Product Number:#########, SN:#########

fct0    Attached
        ioDrive2 Adapter Controller, Product Number:#########, SN:#########, FIO SN:#########

        No warnings or errors detected.

fioa    State: Online, Type: block device
        ID:0, UUID:846cf0b9-818a-4700-90da-ed3850ab34c0
        3200.00 GBytes device size

root@nxx:~# modinfo iomemory-vsl
filename:       /lib/modules/5.15.35-1-pve/updates/dkms/iomemory-vsl.ko
license:        GPL
name:           iomemory_vsl
vermagic:       5.15.35-1-pve SMP mod_unload modversions
 
zelenij - I can confirm that simply adding "iommu=pt" to /etc/default/grub like:

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

and then

sudo update-grub

fixes the fusion-io kernel issue on my t630 when booting any kernel 5.15 or greater. Thank you!

The README.md has now been updated to clarify these settings over at https://github.com/RemixVSL/iomemory-vsl
 
Hey, @Psilospiral and others!

I haven't abandoned the thread, although I understand that updates are needed. My main concern at this moment is that regardless of the configs, flags and other tweaks my Windows 10 VM performance on Proxmox 7 is simply abysmal. I have 2 identical servers with the same VM deployed on both of them and the difference is staggering. There are 2 things that can cause this - either a bug inside the driver or some kernel-related issue. Either way - I'm not ready to endorse Fusion-Io cards to be used with Proxmox 7 and I personally stick to Proxmox 6 in my production environment.

Thank you for keeping the thread alive and I hope I can find a solution soon.
 
Hi, I have an SX350 and used the latest iomemory-vsl4 branch; it works pretty well with the latest Proxmox.
I used iomemory-vsl4, not iomemory-vsl, since the SX350 is a VSL4 device AFAIK.

Linux pve 5.15.39-4-pve #1 SMP PVE 5.15.39-4 (Mon, 08 Aug 2022 15:11:15 +0200) x86_64 GNU/Linux

Now, the problem is that it doesn't show up on the Proxmox storage tab, and I think this is a known issue:
https://github.com/RemixVSL/iomemory-vsl/issues/74

So, based on the above issue, I applied the persistent udev rule:
cp iomemory-vsl4/tools/udev/rules.d/60-persistent-fio.rules /etc/udev/rules.d/

However, after applying the above rule, system startup takes forever and the system loses its network connection.

I checked the syslog and found this:

Code:
Aug 25 03:31:32 pve systemd-udevd[574]: fioa1: Worker [588] processing SEQNUM=4954 is taking a long time
Aug 25 03:32:40 pve systemd-udevd[574]: fioa1: Worker [588] processing SEQNUM=4954 killed
Aug 25 03:32:40 pve systemd-udevd[574]: Worker [588] terminated by signal 9 (KILL)
Aug 25 03:32:40 pve systemd-udevd[574]: fioa1: Worker [588] failed
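Backing the rule out again is just removing the file and reloading udev (standard udevadm calls), roughly:

Code:
rm /etc/udev/rules.d/60-persistent-fio.rules
udevadm control --reload-rules
udevadm trigger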

I wonder if anyone else has tried to apply the persistent rules.

Thanks,
 
Hello, I installed a couple of SX300-3200 cards in two different computers. I was able to get them upgraded to 4.3.7 and working. However, I get the following error on both computers:

The bandwidth of the PCI slot is not optimal for the ioMemory.
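To see what link the card actually negotiated versus what it is capable of, lspci can show it; something along these lines, substituting your card's own PCI address for the example one:

Code:
lspci | grep -i fusion                          # find the card's PCI address
lspci -vv -s 41:00.0 | grep -E 'LnkCap|LnkSta'  # example address; compare capability vs. negotiated status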
 
Hi everybody,

I have got two ioDrive2 drives and am trying to install them on a Dell T410 server using the latest Proxmox (7.3.6).
I am using the scripts modified with the latest versions and have updated the firmware successfully.

The devices are visible but remain in minimal state.
Code:
Mar  3 16:10:31 pxmx kernel: [   43.181040] <6>fioinf ioDrive 0000:06:00.0.0: Required PCIE bandwidth 2.000 GBytes per sec
Mar  3 16:10:31 pxmx kernel: [   43.181047] <6>fioinf ioDrive 0000:06:00.0.0: Board serial number is 1221D2168
Mar  3 16:10:31 pxmx kernel: [   43.181050] <6>fioinf ioDrive 0000:06:00.0.0: Adapter serial number is 1221D2168
Mar  3 16:10:31 pxmx kernel: [   43.181055] <6>fioinf ioDrive 0000:06:00.0.0: Default capacity        785.000 GBytes
Mar  3 16:10:31 pxmx kernel: [   43.181059] <6>fioinf ioDrive 0000:06:00.0.0: Default sector size     512 bytes
Mar  3 16:10:31 pxmx kernel: [   43.181061] <6>fioinf ioDrive 0000:06:00.0.0: Rated endurance         11.00 PBytes
Mar  3 16:10:31 pxmx kernel: [   43.181064] <6>fioinf ioDrive 0000:06:00.0.0: 100C temp range hardware found
Mar  3 16:10:31 pxmx kernel: [   43.181067] <6>fioinf ioDrive 0000:06:00.0.0: Maximum capacity        845.000 GBytes
Mar  3 16:10:31 pxmx kernel: [   45.688998] <6>fioinf ioDrive 0000:06:00.0.0: Firmware version 7.1.17 116786 (0x700411 0x1c832)
Mar  3 16:10:31 pxmx kernel: [   45.689007] <6>fioinf ioDrive 0000:06:00.0.0: Platform version 16.
Mar  3 16:10:31 pxmx kernel: [   45.689010] <6>fioinf ioDrive 0000:06:00.0.0: Firmware VCS version 116786 [0x1c832]
Mar  3 16:10:31 pxmx kernel: [   45.689018] <6>fioinf ioDrive 0000:06:00.0.0: Firmware VCS uid 0xaeb15671994a45642f91efbb214fa428e4245f8a
Mar  3 16:10:31 pxmx kernel: [   45.692138] <6>fioinf ioDrive 0000:06:00.0.0: Powercut flush: Enabled
Mar  3 16:10:31 pxmx kernel: [   45.997007] <6>fioinf ioDrive 0000:06:00.0.0: PCIe power monitor enabled (master). Limit set to 35.0 watts.
Mar  3 16:10:31 pxmx kernel: [   45.997019] <6>fioinf ioDrive 0000:06:00.0.0: Thermal monitoring: Enabled
Mar  3 16:10:31 pxmx kernel: [   45.997023] <6>fioinf ioDrive 0000:06:00.0.0: Hardware temperature alarm set for 100C.
Mar  3 16:10:31 pxmx kernel: [   46.168972] <6>fioinf ioDrive 0000:06:00.0: Found device fct0 (Fusion-io ioDrive2 785GB 0000:06:00.0) on pipeline 0
Mar  3 16:10:31 pxmx kernel: [   46.169492] <3>fioerr Fusion-io ioDrive2 785GB 0000:06:00.0: failed to map append request
Mar  3 16:10:31 pxmx kernel: [   46.169496] <3>fioerr Fusion-io ioDrive2 785GB 0000:06:00.0: request page program 000000009d1e4848 failed -22
Mar  3 16:10:31 pxmx kernel: [   47.224987] <6>fioinf ioDrive 0000:06:00.0.0: stuck flush request on startup detected, retry iteration 1 of 3...
Mar  3 16:10:31 pxmx kernel: [   47.224993] <6>fioinf ioDrive 0000:06:00.0.0: Starting master controller
Mar  3 16:10:31 pxmx kernel: [   47.308956] <6>fioinf ioDrive 0000:06:00.0.0: PMP Address: 1 1 1
Mar  3 16:10:31 pxmx kernel: [   47.532976] <6>fioinf ioDrive 0000:06:00.0.0: SMP Controller Firmware APP  version 1.0.35 0
Mar  3 16:10:31 pxmx kernel: [   47.532982] <6>fioinf ioDrive 0000:06:00.0.0: SMP Controller Firmware BOOT version 0.0.9 1
Mar  3 16:10:31 pxmx kernel: [   50.169008] <6>fioinf ioDrive 0000:06:00.0.0: Required PCIE bandwidth 2.000 GBytes per sec
Mar  3 16:10:31 pxmx kernel: [   50.169015] <6>fioinf ioDrive 0000:06:00.0.0: Board serial number is 1221D2168
Mar  3 16:10:31 pxmx kernel: [   50.169017] <6>fioinf ioDrive 0000:06:00.0.0: Adapter serial number is 1221D2168
Mar  3 16:10:31 pxmx kernel: [   50.169023] <6>fioinf ioDrive 0000:06:00.0.0: Default capacity        785.000 GBytes
Mar  3 16:10:31 pxmx kernel: [   50.169026] <6>fioinf ioDrive 0000:06:00.0.0: Default sector size     512 bytes
Mar  3 16:10:31 pxmx kernel: [   50.169029] <6>fioinf ioDrive 0000:06:00.0.0: Rated endurance         11.00 PBytes
Mar  3 16:10:31 pxmx kernel: [   50.169031] <6>fioinf ioDrive 0000:06:00.0.0: 100C temp range hardware found
Mar  3 16:10:31 pxmx kernel: [   50.169034] <6>fioinf ioDrive 0000:06:00.0.0: Maximum capacity        845.000 GBytes
Mar  3 16:10:31 pxmx kernel: [   52.860999] <6>fioinf ioDrive 0000:06:00.0.0: Firmware version 7.1.17 116786 (0x700411 0x1c832)
Mar  3 16:10:31 pxmx kernel: [   52.861007] <6>fioinf ioDrive 0000:06:00.0.0: Platform version 16.
Mar  3 16:10:31 pxmx kernel: [   52.861010] <6>fioinf ioDrive 0000:06:00.0.0: Firmware VCS version 116786 [0x1c832]
Mar  3 16:10:31 pxmx kernel: [   52.861018] <6>fioinf ioDrive 0000:06:00.0.0: Firmware VCS uid 0xaeb15671994a45642f91efbb214fa428e4245f8a
Mar  3 16:10:31 pxmx kernel: [   52.864101] <6>fioinf ioDrive 0000:06:00.0.0: Powercut flush: Enabled
Mar  3 16:10:31 pxmx kernel: [   53.117610] <3>fioerr ioDrive 0000:06:00.0.0: could not find canonical value across 16 pads
Mar  3 16:10:31 pxmx kernel: [   53.415276] <3>fioerr ioDrive 0000:06:00.0.0: MINIMAL MODE DRIVER: hardware failure.


status:

Code:
root@pxmx:/var/log# fio-status -a

Found 2 ioMemory devices in this system
Driver version: 3.2.16 build 1731

Adapter: Single Controller Adapter
        Fusion-io ioDrive2 785GB, Product Number:F00-001-785G-CS-0001, SN:1221D2168, FIO SN:1221D2168
        ioDrive2 Adapter Controller, PN:PA004137008
        External Power: NOT connected
        PCIe Power limit threshold: 35.00W
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Connected ioMemory modules:
          fct0: Product Number:F00-001-785G-CS-0001, SN:1221D2168

fct0    Status unknown: Driver is in MINIMAL MODE:
                Device has a hardware failure
        ioDrive2 Adapter Controller, Product Number:F00-001-785G-CS-0001, SN:1221D2168
!! ---> There are active errors or warnings on this device!  Read below for details.
        ioDrive2 Adapter Controller, PN:PA004137008
        SMP(AVR) Versions: App Version: 1.0.35.0, Boot Version: 0.0.9.1
        Located in slot 0 Center of ioDrive2 Adapter Controller SN:1221D2168
        Powerloss protection: not available
        PCI:06:00.0, Slot Number:5
        Vendor:1aed, Device:2001, Sub vendor:1aed, Sub device:2001
        Firmware v7.1.17, rev 116786 Public
        Geometry and capacity information not available.
        Format: not low-level formatted
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 55.61 degC, max 55.61 degC
        Internal voltage: avg 1.02V, max 1.02V
        Aux voltage: avg 2.49V, max 2.49V
        Rated PBW: 11.00 PB
        Lifetime data volumes:
           Physical bytes written: 0
           Physical bytes read   : 0
        RAM usage:
           Current: 0 bytes
           Peak   : 0 bytes

        ACTIVE WARNINGS:
            The ioMemory is currently running in a minimal state.

Adapter: Single Controller Adapter
        Fusion-io ioDrive2 785GB, Product Number:F00-001-785G-CS-0001, SN:1239D2015, FIO SN:1239D2015
        ioDrive2 Adapter Controller, PN:PA004137008
        External Power: NOT connected
        PCIe Power limit threshold: 35.00W
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Connected ioMemory modules:
          fct1: Product Number:F00-001-785G-CS-0001, SN:1239D2015

fct1    Status unknown: Driver is in MINIMAL MODE:
                Device has a hardware failure
        ioDrive2 Adapter Controller, Product Number:F00-001-785G-CS-0001, SN:1239D2015
!! ---> There are active errors or warnings on this device!  Read below for details.
        ioDrive2 Adapter Controller, PN:PA004137008
        SMP(AVR) Versions: App Version: 1.0.35.0, Boot Version: 0.0.9.1
        Located in slot 0 Center of ioDrive2 Adapter Controller SN:1239D2015
        Powerloss protection: not available
        PCI:02:00.0, Slot Number:1
        Vendor:1aed, Device:2001, Sub vendor:1aed, Sub device:2001
        Firmware v7.1.17, rev 116786 Public
        Geometry and capacity information not available.
        Format: not low-level formatted
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 55.61 degC, max 55.61 degC
        Internal voltage: avg 1.02V, max 1.02V
        Aux voltage: avg 2.49V, max 2.50V
        Rated PBW: 11.00 PB
        Lifetime data volumes:
           Physical bytes written: 0
           Physical bytes read   : 0
        RAM usage:
           Current: 0 bytes
           Peak   : 0 bytes

        ACTIVE WARNINGS:
            The ioMemory is currently running in a minimal state.




Any suggestions??
 
Hi everybody,
Thanks to this topic and one over at STH (servethehome), I have managed to install the drivers and get the drives working on the newest 6.1 kernel of Proxmox 7.3.
At least, they are interacting with all the tools now. I can format, create volumes and copy stuff.

If anyone is interested, here's how I did it:

1. Removed all old drivers and all installed fio software and rebooted:

Code:
sudo modprobe -r iomemory-vsl

sudo apt-get remove fio-*

reboot

2. Updated the kernel to 6.1 (see this topic)

3. Added
Code:
 intel_iommu=on iommu=pt
to the kernel command line.

If you are booting with GRUB, edit /etc/default/grub
and run "update-grub";

or, if you are booting with systemd-boot, edit /etc/kernel/cmdline
and run "proxmox-boot-tool refresh".

After that: reboot.
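After the reboot, you can confirm the flag really made it onto the running kernel's command line:

Code:
cat /proc/cmdline    # should now contain intel_iommu=on iommu=pt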

4. Installed the drivers as described on this page,
being sure to stick to the "main" tag.
Rebooted again and voilà, they are working fine :)


And now, for the next step...
Since I have two of them, I want to create a mirror on which I serve all the VMs (not storage) to keep my I/O from spiking.
The drives are not showing up as disks in the web GUI, so I followed the instructions in this post, which results in two thin LVs.
But I cannot move any of the existing VM disks to these LVs, nor can I create a mirror this way.
Since I am new to Proxmox, I still have a lot to learn about storage types.
Does anyone have a suggestion on how to approach this?
 
