[TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

Thanks, but unfortunately, while I was able to successfully compile the 5.1.28 branch of snuf's iomemory-vsl fork, it won't recognize my SX350-3200 ioDrive2; I think that version of the driver is for ioDrive gen. 1 only. For now I've reverted to kernel 4.15.18-24-pve, where I can get the stock iomemory_vsl4 4.3.6 to work without problems, while I wait for WD to port their driver to 5.x kernels.
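
For anyone staying on the older kernel for now, here is a minimal sketch of one way to keep booting 4.15.18-24-pve by default via GRUB. The menu entry titles below are assumptions, so list the real ones from /boot/grub/grub.cfg first and adjust the string accordingly:

# List the boot entries GRUB actually knows about (titles vary per system):
grep -E "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2

# Make GRUB remember an explicitly chosen default entry:
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
update-grub

# Point the saved default at the older kernel; the "submenu>entry" string must match your grub.cfg:
grub-set-default 'Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 4.15.18-24-pve'
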
Never had SX300/SX350/PX600 on my hands to test it. Still looking for one.
 
Can anyone on the Proxmox support staff confirm whether there are any major issues with running Proxmox VE 6.1 on kernel 4.15.18 while I wait for WD to port their driver to version 5.x of the kernel? For now everything seems to work smoothly, but before upgrading a whole cluster I'd like to be reasonably sure about that.
 

ZFS and Ceph (the kernel client part) are older than what we assume in our code, and the same goes for some newer kernel features used for LXC (those should have fallbacks, as they only work with 5.3 anyway). There might be other issues that I am not aware of.
 
Hello colleagues!
My goal is to attach my device to my server, but I can't build the source code following the first post.
The build fails on this command:
# dkms build -m iomemory-vsl -v 3.2.16
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area...
'make' DKMS_KERNEL_VERSION=........(bad exit status: 2)
Error! Bad return status for module build on kernel: 5.3.18-2-pve (x86_64)
Consult /var/lib/dkms/iomemory-vsl/3.2.16/build/make.log for more information.

I attached two files to my post: "make.log" and the script used to build the driver, "build.iodrive2.sh".

Is it possible to build it?
Could you please tell me how to achieve my goal?

My device: ioDrive2
My Proxmox: Virtual Environment 6.1-7

# uname -r
5.3.18-2-pve

# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster

P.S. "build.iodrive2.sh" was renamed to "build.iodrive2.txt" according to rules of our forum
 

Attachments

  • make.log (26.4 KB)
  • build.iodrive2.txt (1.8 KB)
Hey!

Don't worry - you can attach the card even to Proxmox 6, although the guide is written for Proxmox 5. I will update the guide after some more testing, but the gist is that the only thing that needs to change is the driver used. In this guide I recompile the drivers provided by the vendor. To make the card work on the latest Proxmox, you need to recompile the drivers from here: https://github.com/snuf/iomemory-vsl
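
For reference, a rough sketch of the DKMS route with that repository on a current pve kernel. Treat it as an outline rather than a verified recipe: the source path inside the clone and the 3.2.16 version string are assumptions, so check the repository README for the exact layout and the right branch for your card generation.

# Build tools plus the headers matching the running Proxmox kernel:
apt-get update
apt-get install -y git dkms build-essential pve-headers-$(uname -r)

# Fetch the community fork of the driver:
git clone https://github.com/snuf/iomemory-vsl.git
cd iomemory-vsl

# Assumed layout: copy the module source into /usr/src so DKMS can find it.
cp -r root/usr/src/iomemory-vsl-3.2.16 /usr/src/

# Register, build and install the module for the running kernel:
dkms add -m iomemory-vsl -v 3.2.16
dkms build -m iomemory-vsl -v 3.2.16
dkms install -m iomemory-vsl -v 3.2.16

# Load it and verify the card shows up (fio-status comes from the vendor utilities):
modprobe iomemory-vsl
fio-status -a
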
 
Thank you, Vladimir
I'll try to rebuild the driver for the ioDrive2 as soon as possible and, of course, I promise to report the result.
 
Let me know if you need help ;)

Hello Vladimir Bulgaru

I still can't build the driver.
So I need help :-(

I did all the steps as a regular user, not as root, and used 'sudo' where necessary.

As described in the repository's README file, to build a Debian installation package [ from here: https://github.com/snuf/iomemory-vsl ]:

OK:
$ sudo apt-get install gcc fakeroot build-essential debhelper linux-headers-$(uname -r) rsync

OK:
Download the driver source file for your target operating system from http://support.fusionio.com

Operating System / Type: Linux_debian-stretch
Version: 3.2.16

OK:
Unpack the driver source file:
$ tar zxvf iomemory-vsl_3.2.16.1731-1.0.tar.gz

OK:
Change directory to the folder created in the previous step:
$ cd iomemory-vsl-3.2.16.1731

NOT OK:
$ dpkg-buildpackage -rfakeroot -b >../build.2020-03-12.log 2>&1


What can I do to resolve this issue?
P.S. I attached a file to my post: "build.2020-03-12.log"
 

Attachments

  • build.2020-03-12.log (31.5 KB)
Thank you, colleagues.
Vladimir Bulgaru helped me rebuild the driver on the server.
At the moment, everything works flawlessly.
 
Just letting everyone know that the guide has been updated:
  1. it comes with the latest drivers
  2. it is compatible with both Proxmox 5 and Proxmox 6
  3. it has been optimised for a single copy-paste into the console
Good luck and looking forward to your feedback!
 
Hello. I have installed the drivers following your guide on Proxmox 6.2 (ioDrive2 1.2) and it works great. However, I have one problem:
I have set up a ZFS pool on the ioDrive. Sometimes I have power problems (I'm trying to determine the source, but that's a separate issue). After the server has been reset, the FIO gets reattached (fio-status -a shows "attaching %"). After the ioDrive is back to status Online, the ZFS pool is still unable to mount. I have to reboot the server manually, and after that everything works flawlessly. Any ideas what I need to do to mount that ZFS pool after the ioDrive reattaches?
 
Hey!

First of all, it's really difficult to find a standard solution to a non-standard situation. As far as I understand, the issue lies in the fact that after the unexpected power-down event the Fusion-Io card goes into a self-diagnosis mode that may take up to 15 minutes to finish, while ZFS tries to import the pool on a device that has not attached yet and fails.

You could try developing a script that runs every 1-5 minutes, checks whether there are ZFS pools that are not imported and, if there are, tries to import them again. Although, since the issue seems to happen fairly regularly in your case, a more reliable solution would be to get a cheap smart UPS and shut down your server on power outage. That would definitely be a worthy investment given the cost of even one Fusion-Io card, let alone the other server components.
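
As a rough sketch of that first option (nothing official; the pool name "fiopool" and the script path are placeholders), something like the following could run from cron every few minutes and try to import the pool once the card has finished attaching:

#!/bin/bash
# fio-zfs-check.sh - re-import a ZFS pool after the ioDrive finishes reattaching.
# "fiopool" is a placeholder; replace it with the name of your pool.
POOL="fiopool"

# Nothing to do if the pool is already imported.
zpool list -H -o name | grep -qx "$POOL" && exit 0

# Don't try anything while fio-status still reports the card as attaching.
if fio-status -a | grep -qi "attaching"; then
    exit 0
fi

# Try to import the pool and mount its datasets, logging the outcome.
if zpool import "$POOL" && zfs mount -a; then
    logger "fio-zfs-check: imported $POOL after ioDrive reattach"
else
    logger "fio-zfs-check: could not import $POOL yet"
fi

Paired with a line such as "*/5 * * * * root /usr/local/sbin/fio-zfs-check.sh" in /etc/cron.d/fio-zfs-check, this avoids the manual reboot, although the UPS is still the cleaner fix.
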
 

Where can I find the updated tutorial? I just got a 1.3TB ioScale2 and want to install it in a "new" Supermicro X9DRi-LN4F+.

Thanks!
 
The one in the initial message has been updated and can be used now.
 
@Vladimir Bulgaru
Do you see the fusion cards in the disks tab?
I don't, but they are working. I can see their info with fio-status. I also wiped them with a new, empty GPT, and Proxmox still does not see them.
I can create a ZFS pool from the console, but I was wondering if it's normal for Proxmox not to see them.
 
Hey!

Fusion-Io cards are not technically disks. Think of them as external storage: the card itself has separate memory modules and a storage controller inside, so it's more like a RAID of SSDs. You won't see them in the Disks tab, and, most likely, there is no way to boot Proxmox from them (unless you do some UEFI magic).
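
If it helps, a minimal sketch of using the card from the command line anyway. The device name /dev/fioa is what the iomemory-vsl driver typically exposes, and "fiotank" / "fio-zfs" are placeholder names, so adjust to your setup:

# Confirm the card is attached and see which block device the driver created:
fio-status -a
ls -l /dev/fio*

# Create a ZFS pool on it and register that pool as a Proxmox storage:
zpool create -o ashift=12 fiotank /dev/fioa
pvesm add zfspool fio-zfs -pool fiotank -content images,rootdir
pvesm status

Once added via pvesm, the pool should show up in the GUI's storage list like any other ZFS pool, even though the card itself never appears in the Disks tab.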
 
