[TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

Vladimir Bulgaru

Foreword and update
The guide was long overdue for an update, but luckily yesterday I managed to test a couple of things that let me be quite certain the setup is stable and can be used in production. A few notes. The drivers are based on the snuf drivers. From my experience with Western Digital, the current state of things is a train wreck: they are not only unwilling, but probably also incapable of properly maintaining the drivers. This guide applies to both kernels: 4.x (Proxmox 5) and 5.x (Proxmox 6). Pay attention when downloading the drivers so you don't pick the wrong ones. All drivers are from my Dropbox repository; this prevents issues with folder naming and allows a single copy-paste action to deploy the drivers. This guide was tested against the latest versions of Proxmox 5 and Proxmox 6.

Quick update #2: I've just purchased an SX350 (ioDrive3) ioMemory card. Detailed deployment instructions will be posted as soon as I figure it out. I know that we're all doing our part for the open-source community, but I wanted to thank everyone who has donated or requested paid support, since this really helps with the digging and testing.

Quick update #3: Updated the guide for Proxmox 7.

–––

Hello, everyone!

I spent the past 24 hours figuring things out and decided to share the knowledge, since information on the subject is scarce.

Background
I was trying to figure out the optimal setup for running VMs on Proxmox. The main bottleneck always seemed to be IO related (I was thinking IOPS and read/write speed). I started investigating the appropriate solution and decided that the only way to know for sure was to actually test things out. I wanted to try two scenarios: using 4 SAS HDDs in a ZFS RAID10 pool with ZIL/SLOG on a PCIe SSD, and simply running VMs off an LVM-thin pool on a PCIe SSD. Since I was using the old Dell R620 workhorse, the only suggestion I found from Dell regarding compatible PCIe SSDs was the Fusion-io ioDrive2. So I got one of those. It turned out that getting it to work on Proxmox is not an easy thing.
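(For reference, the first scenario would look roughly like the sketch below. The disk paths and pool name are purely illustrative, and /dev/fioa is where the card typically appears once the driver from this guide is loaded.)
Code:
# Four hypothetical SAS disks as striped mirrors (RAID10)...
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# ...with the PCIe SSD attached as a separate intent-log device (SLOG)
zpool add tank log /dev/fioa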

The problems
  1. Fusion-io drivers were designed with enterprise OSs in mind (RHEL, SLES, ESXi, etc.). Later on, SanDisk was kind enough to extend the drivers to other OSs (the updated list is here: https://link-app.sandisk.com/Home/SoftwareDownload). But you cannot get these drivers to work on Proxmox directly; you need to recompile them from source for your kernel. Moreover, only kernels up to 4.x are supported.
  2. Kernels do get updated, and it would be a pain to lose the drive with every update.
Mapping the solution
  1. The very first thread I encountered discussing the idea of using Fusion-io with Proxmox was this one: https://www.reddit.com/r/homelab/comments/8lp8hw/fusionio_drivers_for_proxmox/
  2. I understood that I would likely need the source of the drivers and the kernel headers.
  3. The post referenced this GitHub repo: https://github.com/snuf/iomemory-vsl. The drivers there are not strictly necessary, since you can get the updated ones from SanDisk directly and compile them against the latest Proxmox kernel (as you will see later). What caught my eye was the DKMS approach to keeping the drivers in line with kernel updates.
  4. I noticed that simply installing the kernel headers via apt install pve-headers didn't work properly, and discovered that the precise version needs to be indicated. The current kernel version is available via uname -r (see the example right after this list).
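A minimal illustration of pinning the headers to the running kernel (this is exactly the trick the one-liners below rely on):
Code:
# Show the running kernel release, then install the exactly matching headers
uname -r
apt install pve-headers-`uname -r`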
The actual process
  1. (This step applies only to Proxmox 5 instances) Copy this whole block into the Proxmox 5 console. It will automatically handle the driver download, dependencies, preparation and installation:
    Code:
    apt update && apt install --assume-yes pve-headers pve-headers-`uname -r` zip unzip gcc fakeroot build-essential debhelper rsync dkms && apt upgrade && apt autoremove --assume-yes && \
    mkdir -p /home/temp && cd /home/temp && \
    wget -O iomemory-vsl.zip https://www.dropbox.com/s/ktj2ive9elah04n/iomemory-vsl-4.20.1.zip?dl=1 && \
    wget -O fio-common_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/pd2ohfaufhwqc34/fio-common_3.2.16.1731-1.0_amd64.deb?dl=1 && \
    wget -O fio-firmware-fusion_3.2.16.20180821-1_all.deb https://www.dropbox.com/s/kcn5agi6lyikicf/fio-firmware-fusion_3.2.16.20180821-1_all.deb?dl=1 && \
    wget -O fio-sysvinit_3.2.16.1731-1.0_all.deb https://www.dropbox.com/s/g39l6lg9of6eqze/fio-sysvinit_3.2.16.1731-1.0_all.deb?dl=1 && \
    wget -O fio-util_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/57huby17mteg6wp/fio-util_3.2.16.1731-1.0_amd64.deb?dl=1 && \
    unzip iomemory-vsl.zip && cd /home/temp/iomemory-vsl && \
    cp -r /home/temp/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16 /usr/src/ && \
    mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build && \
    ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source && \
    dkms build -m iomemory-vsl -v 3.2.16 && \
    dkms install -m iomemory-vsl -v 3.2.16 && \
    modprobe iomemory-vsl && \
    cd /home/temp && \
    dpkg -i fio-firmware-fusion_3.2.16.20180821-1_all.deb fio-util_3.2.16.1731-1.0_amd64.deb fio-sysvinit_3.2.16.1731-1.0_all.deb fio-common_3.2.16.1731-1.0_amd64.deb
  2. (This step applies only to Proxmox 6 instances) Copy this whole block into the Proxmox 6 console. It will automatically handle the driver download, dependencies, preparation and installation:
    Code:
    apt update && apt install --assume-yes pve-headers pve-headers-`uname -r` zip unzip gcc fakeroot build-essential debhelper rsync dkms && apt upgrade && apt autoremove --assume-yes && \
    mkdir -p /home/temp && cd /home/temp && \
    wget -O iomemory-vsl.zip https://www.dropbox.com/s/n3a03ueumnjzbp8/iomemory-vsl-5.6.0.zip?dl=1 && \
    wget -O fio-common_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/pd2ohfaufhwqc34/fio-common_3.2.16.1731-1.0_amd64.deb?dl=1 && \
    wget -O fio-firmware-fusion_3.2.16.20180821-1_all.deb https://www.dropbox.com/s/kcn5agi6lyikicf/fio-firmware-fusion_3.2.16.20180821-1_all.deb?dl=1 && \
    wget -O fio-sysvinit_3.2.16.1731-1.0_all.deb https://www.dropbox.com/s/g39l6lg9of6eqze/fio-sysvinit_3.2.16.1731-1.0_all.deb?dl=1 && \
    wget -O fio-util_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/57huby17mteg6wp/fio-util_3.2.16.1731-1.0_amd64.deb?dl=1 && \
    unzip iomemory-vsl.zip && cd /home/temp/iomemory-vsl && \
    cp -r /home/temp/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16 /usr/src/ && \
    mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build && \
    ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source && \
    dkms build -m iomemory-vsl -v 3.2.16 && \
    dkms install -m iomemory-vsl -v 3.2.16 && \
    modprobe iomemory-vsl && \
    cd /home/temp && \
    dpkg -i fio-firmware-fusion_3.2.16.20180821-1_all.deb fio-util_3.2.16.1731-1.0_amd64.deb fio-sysvinit_3.2.16.1731-1.0_all.deb fio-common_3.2.16.1731-1.0_amd64.deb
  3. (This step applies only to Proxmox 7 instances) Copy this whole block into the Proxmox 7 console. It will automatically handle the driver download, dependencies, preparation and installation:
    Code:
    apt update && apt --assume-yes install zip unzip pve-headers pve-headers-`uname -r` && apt --assume-yes upgrade && apt --assume-yes autoremove && \
    mkdir -p /home/temp && cd /home/temp && \
    wget -O iomemory-vsl.zip https://www.dropbox.com/s/df06nuzvqndlvnk/iomemory-vsl-5.12.1.zip?dl=1 && \
    wget -O fio-common_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/pd2ohfaufhwqc34/fio-common_3.2.16.1731-1.0_amd64.deb?dl=1 && \
    wget -O fio-firmware-fusion_3.2.16.20180821-1_all.deb https://www.dropbox.com/s/kcn5agi6lyikicf/fio-firmware-fusion_3.2.16.20180821-1_all.deb?dl=1 && \
    wget -O fio-sysvinit_3.2.16.1731-1.0_all.deb https://www.dropbox.com/s/g39l6lg9of6eqze/fio-sysvinit_3.2.16.1731-1.0_all.deb?dl=1 && \
    wget -O fio-util_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/57huby17mteg6wp/fio-util_3.2.16.1731-1.0_amd64.deb?dl=1 && \
    unzip iomemory-vsl.zip && cd /home/temp/iomemory-vsl-5.12.1 && \
    apt update && apt --assume-yes install gcc fakeroot build-essential debhelper rsync dkms && \
    cp -r /home/temp/iomemory-vsl-5.12.1/root/usr/src/iomemory-vsl-3.2.16 /usr/src/ && \
    mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build && \
    ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source && \
    dkms build -m iomemory-vsl -v 3.2.16 && \
    dkms install -m iomemory-vsl -v 3.2.16 && \
    modprobe iomemory-vsl && \
    cd /home/temp && \
    dpkg -i fio-firmware-fusion_3.2.16.20180821-1_all.deb fio-util_3.2.16.1731-1.0_amd64.deb fio-sysvinit_3.2.16.1731-1.0_all.deb fio-common_3.2.16.1731-1.0_amd64.deb
  4. You may need to compile the drivers for the other kernels present on the system:
    Code:
    ls /var/lib/initramfs-tools | sudo xargs -n1 /usr/lib/dkms/dkms_autoinstaller start
    or, on Proxmox 7:
    Code:
    ls /lib/modules | sudo xargs -n1 /usr/lib/dkms/dkms_autoinstaller start
  5. You may need to reboot the OS; make sure the device is attached after the reboot by running fio-status -a
  6. Format the device according to your needs and enjoy (a hedged example follows right below).
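As an example for step 6, here is a minimal sketch of an LVM-thin setup, which is what I ended up using. The device name /dev/fioa is the usual one (confirm with fio-status -a and lsblk), and the VG/pool/storage names are just placeholders:
Code:
# Create an LVM-thin pool on the card (assuming it appears as /dev/fioa)
pvcreate /dev/fioa
vgcreate fusionio /dev/fioa
# Leave some headroom for thin-pool metadata
lvcreate -l 90%FREE --thinpool data fusionio
# Register it as a Proxmox storage
pvesm add lvmthin fusionio-thin --vgname fusionio --thinpool data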
I'm still waiting to see how the device holds up after a kernel update, just to make sure it's production ready. Other than that, it seems to work properly. I've managed to create an LVM-thin pool and import the VMs onto it. The IO is very subpar (a topic for another thread), but I was glad that very decent hardware may still be used with the latest OSs.
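If you want to keep an eye on it across kernel updates, a quick sanity check after each reboot (standard DKMS and module commands, nothing exotic):
Code:
# Is the module built and installed for the running kernel?
dkms status | grep iomemory
# Is it loaded, and does the card attach?
lsmod | grep -i iomemory
fio-status -a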

Hope this helps a couple of souls out there. And you can buy me a beer if it saved you a couple of sleepless nights ;)
 
Vladimir:

On step 2, I get:

Code:
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731# apt-get install gcc fakeroot build-essential debhelper rsync pve-headers-4.15.18-15-pve dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package pve-headers-4.15.18-15-pve
E: Couldn't find any package by glob 'pve-headers-4.15.18-15-pve'
E: Couldn't find any package by regex 'pve-headers-4.15.18-15-pve'

Thinking I had the wrong pve-header version, I ran:

Code:
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731# uname -r
4.15.18-12-pve

And noticed 4.15.18-12-pve instead of your 4.15.18-15-pve... So I tried:

Code:
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731# apt-get install gcc fakeroot build-essential debhelper rsync pve-headers-4.15.18-12-pve dkms

and the result was:

Code:
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731# apt-get install gcc fakeroot build-essential debhelper rsync pve-headers-4.15.18-12-pve dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package pve-headers-4.15.18-12-pve
E: Couldn't find any package by glob 'pve-headers-4.15.18-12-pve'
E: Couldn't find any package by regex 'pve-headers-4.15.18-12-pve'

Any suggestions?
 
Hey!

It looks like you don't have the free PVE repository listed. You should include this repository. Here's what you need to do:
Code:
nano /etc/apt/sources.list
Insert the following line just after the first line that is already there:
Code:
deb http://download.proxmox.com/debian stretch pve-no-subscription
Afterwards, run the update command:
Code:
apt update
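Equivalently, as a single copy-paste (it appends the same repository line to the stock sources.list on Proxmox 5 / Debian stretch):
Code:
echo "deb http://download.proxmox.com/debian stretch pve-no-subscription" >> /etc/apt/sources.list
apt update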
 
Vladimir,

Awesome - thank you for the help! That cured my issue with step #2. I had no problem creating the DKMS config file in step #3. However, at step #4 I received "Error! Problems with mkinitrd detected. Automatically uninstalling this module."... the details are below:

Code:
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731# cp -r /home/temp/iomemory-vsl-3.2.16.1731/root/usr/src/iomemory-vsl-3.2.16 /usr/src/ && \
> mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build && \
> ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source && \
> dkms build -m iomemory-vsl -v 3.2.16 && \
> dkms install -m iomemory-vsl -v 3.2.16 && \
> modprobe iomemory-vsl

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=4.15.18-12-pve.....................
cleaning build area...

DKMS: build completed.

iomemory-vsl:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.15.18-12-pve/updates/dkms/

depmod...

Backing up initrd.img-4.15.18-12-pve to /boot/initrd.img-4.15.18-12-pve.old-dkms
Making new initrd.img-4.15.18-12-pve
(If next boot fails, revert to initrd.img-4.15.18-12-pve.old-dkms image)
update-initramfs.................(bad exit status: 1)

-------- Uninstall Beginning --------
Module:  iomemory-vsl
Version: 3.2.16
Kernel:  4.15.18-12-pve (x86_64)
-------------------------------------

Status: Before uninstall, this module version was ACTIVE on this kernel.

iomemory-vsl.ko:
 - Uninstallation
   - Deleting from: /lib/modules/4.15.18-12-pve/updates/dkms/
 - Original module
   - No original module was found for this module on this kernel.
   - Use the dkms install command to reinstall any previous module version.

depmod...

Backing up initrd.img-4.15.18-12-pve to /boot/initrd.img-4.15.18-12-pve.old-dkms
Making new initrd.img-4.15.18-12-pve
(If next boot fails, revert to initrd.img-4.15.18-12-pve.old-dkms image)
update-initramfs...............(bad exit status: 1)
Warning: There was a problem remaking your initrd.  You must manually remake it
before booting into this kernel.

DKMS: uninstall completed.
Error! Problems with mkinitrd detected.  Automatically uninstalling this module.
DKMS: Install Failed (mkinitrd problems).  Module rolled back to built state.
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731# touch ./root/usr/src/iomemory-vsl-3.2.16/dkms.conf && nano ./root/usr/src/iomemory-vsl-3.2.16/dkms.conf
> mkdir -p /var/lib/dkms/iomemory-vsl/3.2.16/build && \
> ln -s /usr/src/iomemory-vsl-3.2.16 /var/lib/dkms/iomemory-vsl/3.2.16/source && \
> dkms build -m iomemory-vsl -v 3.2.16 && \
> dkms install -m iomemory-vsl -v 3.2.16 && \
> modprobe iomemory-vsl

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=4.15.18-12-pve.....................
cleaning build area...

DKMS: build completed.

iomemory-vsl:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.15.18-12-pve/updates/dkms/

depmod...

Backing up initrd.img-4.15.18-12-pve to /boot/initrd.img-4.15.18-12-pve.old-dkms
Making new initrd.img-4.15.18-12-pve
(If next boot fails, revert to initrd.img-4.15.18-12-pve.old-dkms image)
update-initramfs.................(bad exit status: 1)

-------- Uninstall Beginning --------
Module:  iomemory-vsl
Version: 3.2.16
Kernel:  4.15.18-12-pve (x86_64)
-------------------------------------

Status: Before uninstall, this module version was ACTIVE on this kernel.

iomemory-vsl.ko:
 - Uninstallation
   - Deleting from: /lib/modules/4.15.18-12-pve/updates/dkms/
 - Original module
   - No original module was found for this module on this kernel.
   - Use the dkms install command to reinstall any previous module version.

depmod...

Backing up initrd.img-4.15.18-12-pve to /boot/initrd.img-4.15.18-12-pve.old-dkms
Making new initrd.img-4.15.18-12-pve
(If next boot fails, revert to initrd.img-4.15.18-12-pve.old-dkms image)
update-initramfs...............(bad exit status: 1)
Warning: There was a problem remaking your initrd.  You must manually remake it
before booting into this kernel.

DKMS: uninstall completed.
Error! Problems with mkinitrd detected.  Automatically uninstalling this module.
DKMS: Install Failed (mkinitrd problems).  Module rolled back to built state.
root@best-pve:/home/temp/iomemory-vsl-3.2.16.1731#

I proceeded on to step #5, rebooted, issued "fio-status" at the command line, and got this:

Code:
root@best-pve:~# fio-status

Found 1 ioMemory device in this system
Driver version: Driver not loaded

Adapter: ioMono
        Fusion-io 1.65TB ioScale2, Product Number:F11-003-1T65-CS-0001, SN:1312G1633, FIO SN:1312G1633
        External Power: NOT connected
        PCIe Power limit threshold: Disabled
        Connected ioMemory modules:
          43:00.0:      Product Number:F11-003-1T65-CS-0001, SN:1312G1633

43:00.0 ioDrive2 Adapter Controller, Product Number:F11-003-1T65-CS-0001, SN:1312G1633
        PCI:43:00.0
        Firmware v7.1.17, rev 116786 Public
        Internal temperature: 84.16 degC, max 85.64 degC

root@best-pve:~#

So I know I'm getting close, but in the PVE webUI at best-pve>Disks, I do not see the Fusion-IO drive...
 
Hey!

It seems to be an issue specific to your instance only.
  1. Have you updated Proxmox to the latest version?
  2. Is it a clean Proxmox installation, or did you install it on top of Debian?
  3. Is there enough free space on the root drive (where Proxmox is)?
  4. Have you changed permissions for the files and folders of the OS, or are they the Proxmox defaults?
If you want, I can try connecting remotely and have a look at it. But I'd strongly suggest updating the Proxmox kernel/headers first.
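(Quick ways to check the version and free-space points, just as a hint:)
Code:
pveversion -v   # Proxmox and kernel package versions
df -h /         # free space on the root filesystem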
 
Vladimir:

Ok, because I wanted to document my steps in creating this machine, I went back and completely reinstalled PVE 5.4 from scratch on the R720xd.

This time around, my SD card (local) is 71% full after completing all of the steps above. I am only focusing on the installation of the Fusion-io card, with no VMs at this point...

At step #4 above, this time I received no error like before ("Problems with mkinitrd detected. Automatically uninstalling this module."). I captured all of the CLI output for reference.

Upon reboot and issuing "fio-status" I receive:

Code:
Found 1 ioMemory device in this system
Driver version: Driver not loaded

Adapter: ioMono
        Fusion-io 1.65TB ioScale2, Product Number:F11-003-1T65-CS-0001, SN:1312G1633, FIO SN:1312G1633
        External Power: NOT connected
        PCIe Power limit threshold: Disabled
        Connected ioMemory modules:
          43:00.0:      Product Number:F11-003-1T65-CS-0001, SN:1312G1633

43:00.0 ioDrive2 Adapter Controller, Product Number:F11-003-1T65-CS-0001, SN:1312G1633
        PCI:43:00.0
        Firmware v7.1.17, rev 116786 Public
        Internal temperature: 89.08 degC, max 89.57 degC

When I go to >Disks I see all of my SAS drives, the SD card for the OS, but do not see the Fusion IO drive...

I see a note about "External Power: NOT connected"... Does this card require an external power cable in addition to being plugged into the PCIe slot?!?
 
I'm really afraid to assume anything, but it may be as stupid as a copy-paste problem from the forum (e.g. smart quotes, etc.).
Let me know if it's possible to connect remotely and I can try to fix it.
 
First of all, a BIG THANK YOU to Vladimir for saving everybody on this forum a lengthy troubleshooting conversation by starting a private one with me regarding his tutorial on the Fusion-io. I wanted to post a quick reply regarding my actual issue with the Fusion-io ioScale2 and the PowerEdge R720xd.

I have learned that the Fusion-io ioScale2 gives off some serious heat! By the time I completed Vladimir's tutorial on building the drivers, my card had already reached 80°C. Because my Fusion-io card came with a half-height bracket, I naturally installed it in a half-height slot in my R720xd: slot #3. This slot is directly over the iDRAC chip on the mainboard, so it is already a hot spot inside the server. I then moved it to slot #5, which provided some air space between the 4-port NIC and the empty slot #4 above it. The logic was that there would be airflow on both sides of the card. There was a slight improvement in temperatures, but the card still began thermal throttling. I found additional documentation indicating the card should be installed in slot #4 in an R720xd, presumably for the x16 link width. I had similar temperature issues there as well.

So I got the bright idea to remove the server lid, place a dielectric weight on the "intrusion switch" so the fans would not rev up, and give it another try. Now my Fusion-io card stays at 55°C, the drivers load with no problem at reboot, and there is no thermal throttling. Problem solved.

So for those with a Fusion-io card slated for an R720xd, be prepared to deal with the heat. There are ways in iDRAC, and possibly the BIOS, to speed up the fans, but I prefer to keep the server quiet; at full speed, the fans sound like a jet taking off. I have ordered a PCI fan card that I will place in slot #5, directly below slot #4, and hopefully it can keep the Fusion-io card cool with the lid on. I will post back later with my results. (For anyone who prefers the faster-fans route, see the IPMI sketch below.)
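The commonly cited IPMI raw commands for these Dell generations look like the following. The exact bytes are community folklore rather than a documented API, so verify them against your iDRAC firmware before relying on them:
Code:
# Take over fan control from iDRAC (community-documented raw command for Dell 12th-gen)
ipmitool raw 0x30 0x30 0x01 0x00
# Pin all fans to a fixed ~30% duty cycle (last byte is the percentage in hex, 0x1e = 30)
ipmitool raw 0x30 0x30 0x02 0xff 0x1e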

Vladimir's tutorial above is spot-on and worked great, even for a Linux novice like me. If you benefit from his tutorial, please send him a beer donation! He brings value to the PVE community.
 
I may well be doing something wrong, but upgrading to Proxmox 6.0 (kernel 5.0) breaks this on my server. I tested using a VM (passing through my ioDrive) with a clean Proxmox 5.4 install. When I followed your (excellent!) guide, everything installed fine. Then I did a dist-upgrade to the version released today, and the driver would no longer initialize. An attempt at rebuilding the driver with your instructions on Proxmox 6.0 failed with:
Code:
/var/lib/dkms/iomemory-vsl/3.2.16/build/ktime.c: In function ‘fusion_getwallclocktime’:
/var/lib/dkms/iomemory-vsl/3.2.16/build/ktime.c:127:26: error: implicit declaration of function ‘current_kernel_time’; did you mean ‘current_time’? [-Werror=implicit-function-declaration]
     struct timespec ts = current_kernel_time();
                          ^~~~~~~~~~~~~~~~~~~
                          current_time
/var/lib/dkms/iomemory-vsl/3.2.16/build/ktime.c:127:26: error: invalid initializer
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:286: /var/lib/dkms/iomemory-vsl/3.2.16/build/ktime.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [Makefile:1606: _module_/var/lib/dkms/iomemory-vsl/3.2.16/build] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.0.15-1-pve'
make: *** [Makefile:82: modules] Error 2
Some functions used in the driver build process were removed or changed in recent kernel iterations. I'm not sure we should hold our breath for SanDisk/WD to release new drivers compatible with kernel 5.0...
 
Hey!

You're not doing anything wrong. There are no drivers that work for Debian 10 (on which Proxmox 6 is based), therefore the card stops working. Please help the cause by joining the Western Digital support center here: https://portal.wdc.com/Support/s/login/?startURL=/Support/s/&ec=302

Create a ticket requesting a driver update for Debian 10. The update sounds more difficult than it is: AFAIK there are some kernel offsets that need to be updated, and it should neither be difficult nor take a long time. I assume that if they start getting requests on a regular basis, they will allocate resources for it.

And, please, actually do that. It won't take more than 5-10 minutes, but it will put the Fusion-io cards back into the spotlight. Right now, as someone put it, Fusion-io is like a stepchild that came under WD's wing with the SanDisk acquisition, and they seem quite frustrated by the fact that all these cards tend to have an endurance of 8, 17 or even 95 PBW :D Basically, some of these cards may live way longer than WD itself :P
 
Good point :D. I just created a ticket, let us hope for the best. I guess that the possibility of any driver update will depend on SLES/RHEL adopting the 5.0 kernel.
 
Hello, I keep getting this error even after copying the file in question over to the directory that it says in the error. Any ideas?

Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=4.15.18-12-pve........
Error! Build of iomemory-vsl.ko failed for: 4.15.18-12-pve (x86_64)
Consult the make.log in the build directory
/var/lib/dkms/iomemory-vsl/4.3.5/build/ for more information.

Building modules, stage 2.
MODPOST 1 modules
WARNING: could not find /var/lib/dkms/iomemory-vsl/4.3.5/build/kfio/.x86_64_cc63_libkfio.o.cmd for /var/lib/dkms/iomemory-vsl/4.3.5/build/kfio/x86_64_cc63_libkfio.o
CC /var/lib/dkms/iomemory-vsl/4.3.5/build/iomemory-vsl4.mod.o
LD [M] /var/lib/dkms/iomemory-vsl/4.3.5/build/iomemory-vsl4.ko
make[1]: Leaving directory '/usr/src/linux-headers-4.15.18-12-pve'
 
Hey!

Can you please clarify:
  1. What Fusion-io card are you trying to install? - PX600 1.3TB
  2. What is the Proxmox version? - 5.4
  3. What is your hardware setup? - ASUS Z9PE-D16 motherboard, 256GB RAM, dual Intel E5-2690, Intel X520 dual 10GbE, HighPoint 2760A 24-port RAID card

See my replies inline. Thank you for your quick response.
 
That’s awesome! I’m going to upgrade to PVE 6 too. And build from source. Thank you so much!
 
