Using Fusion-io Cards

pjkenned

New Member
Dec 16, 2013
I tried installing the Fusion-io VSL drivers using both the pre-compiled .deb binaries for Wheezy and by compiling my own. I can see the cards via fio-status, but running modprobe iomemory-vsl keeps giving me an Exec format error.

I wanted to see if anyone here has an idea how to get these cards working. They are really reasonably priced now (~$0.50-$0.60/GB) and are low-latency devices.

Code:
root@intel-e5-2699-v3:~/ioDrive/Linux_debian-wheezy/3.2.8/SoftwareSource# modprobe iomemory-vsl
ERROR: could not insert 'iomemory_vsl': Exec format error
root@intel-e5-2699-v3:~/ioDrive/Linux_debian-wheezy/3.2.8/SoftwareSource# fio-status


Found 2 ioMemory devices in this system
Driver version: Driver not loaded


Adapter: Single Adapter
        Fusion-io ioDrive 353GB, Product Number:FS1-003-353-CS, SN:2345, FIO SN:2345
        Connected ioMemory modules:
          81:00.0:      Product Number:FS1-003-353-CS, SN:2345


81:00.0 ioDrive 353GB, Product Number:FS1-003-353-CS, SN:2345
        Located in slot 0 Center of Pseudo Low-Profile ioDIMM Adapter SN:2345
        PCI:81:00.0
        Firmware v7.1.17, rev 116786 Public
        Internal temperature: 54.63 degC, max 55.12 degC


Adapter: Single Adapter
        Fusion-io ioDrive 353GB, Product Number:FS1-003-353-CS, SN:1234, FIO SN:1234
        Connected ioMemory modules:
          82:00.0:      Product Number:FS1-003-353-CS, SN:1234


82:00.0 ioDrive 353GB, Product Number:FS1-003-353-CS, SN:1234
        Located in slot 0 Center of Pseudo Low-Profile ioDIMM Adapter SN:1234
        PCI:82:00.0
        Firmware v7.1.17, rev 116786 Public
        Internal temperature: 55.61 degC, max 55.61 degC

Made a thread here with what I have tried thus far.

Has anyone managed to get ioDrive cards working?
 

udo

Famous Member
Apr 22, 2009
Ahrensburg; Germany
pjkenned said: (original post quoted above)
Hi,
the precompiled packages can't work, because PVE uses its own kernel, not the Wheezy kernel.

The question is why your self-compiled version doesn't work.

Are your header files current?
Do your header files and installed kernel match the running kernel (perhaps you forgot a reboot after the last kernel update)?
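Udo's check can be sketched as a quick shell test. This is a minimal sketch, assuming the usual Debian/PVE header locations; it is not an official tool:

```shell
# Sanity check: do the installed headers match the *running* kernel?
# A module built against mismatched headers typically fails to load.
running="$(uname -r)"
echo "Running kernel: $running"

if [ -d "/lib/modules/$running/build" ] || [ -d "/usr/src/linux-headers-$running" ]; then
    echo "Headers present for $running"
else
    echo "No headers for $running -- install pve-headers-$running, then rebuild the module"
fi
```

If the header directory that exists is for a different version than `uname -r` reports, a reboot into the updated kernel (or installing the matching headers) comes first, then a rebuild.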

Udo
 

pjkenned

New Member
Dec 16, 2013
udo said: (reply quoted above)

Yeah, it is a bit crazy. I just re-installed Proxmox on the machine and will give it another try this afternoon. I am keeping notes on the STH thread. It would be awesome to get this working, since it is cheap, high-performance storage.
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
Maybe your compiled drivers were installed alongside the precompiled drivers and the kernel was still using the latter. Also, after installing compiled drivers you need to register them by running depmod.
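A minimal sketch of that sequence (run as root on the PVE host; the lsmod check at the end is just confirmation):

```shell
# Rebuild the module dependency index so modprobe can find the freshly
# installed iomemory-vsl.ko, then load and verify it.
depmod -a || echo "depmod failed (are you root?)"
modprobe iomemory-vsl || echo "modprobe failed -- check dmesg for the reason"
lsmod | grep iomemory || echo "module not loaded"
```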
 

pjkenned

New Member
Dec 16, 2013
Thank you @mir - that is what I was thinking too, so I decided to go back to a clean PVE install. Will update with what I find.
 

pjkenned

New Member
Dec 16, 2013
mir said: (reply quoted above)

Even after a fresh install and re-trying no luck.
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
1) Get the latest source, which is 3.2.10.
2) Unpack and cd iomemory-vsl-3.2.10.1509
3) dpkg-buildpackage
4) cd .. && sudo dpkg -i iomemory-vsl-2.6.32-37-pve_3.2.10.1509-1.0_amd64.deb iomemory-vsl-config-2.6.32-37-pve_3.2.10.1509-1.0_amd64.deb
5) Get fio-utils and fio-firmware-fusion 3.2.10
6) sudo dpkg -i fio-*

After this:
7) sudo modprobe iomemory-vsl

Code:
filename:       /lib/modules/2.6.32-37-pve/extra/fio/iomemory-vsl.ko
license:        Proprietary
srcversion:     26FD9919E3513A5C620DFC2
alias:          pci:v00001AEDd00002001sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001008sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001007sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001006sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001005sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001004sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001003sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001001sv*sd*bc*sc*i*
alias:          pci:v00001AEDd00001000sv*sd*bc*sc*i*
depends:        
vermagic:       2.6.32-37-pve SMP mod_unload modversions 
parm:           strict_sync:Force strict write flushing on early non-powercut safe cards. (1=enable, 0=disable, -1=auto) Do not change for newer cards. (int)
parm:           disable_msix:N/A (int)
parm:           bypass_ecc:N/A (int)
parm:           groomer_low_water_delta_hpcnt:The proportion of logical space free over the runway that represents 'the wall' (int)
parm:           disable_scanner:For use only under the direction of Customer Support. (int)
parm:           use_command_timeouts:Use the command timeout registers (int)
parm:           use_large_pcie_rx_buffer:If true, use 1024 byte PCIe rx buffer. This improves performance but causes NMIs on some specific hardware. (int)
parm:           rsort_memory_limit_MiB:Memory limit in MiBytes for rsort rescan. (int)
parm:           capacity_warning_threshold:If the reserve space is below this threshold (in hundredths of percent), warnings will be issued. (int)
parm:           enable_unmap:Enable UNMAP support. (int)
parm:           fio_dev_wait_timeout_secs:Number of seconds to wait for device file creation before continuing. (int)
parm:           iodrive_load_eb_map:For use only under the direction of Customer Support. (int)
parm:           use_new_io_sched:N/A (int)
parm:           groomer_high_water_delta_hpcnt:The proportion of logical space over the low watermark where grooming starts (in ten-thousandths) (int)
parm:           preallocate_mb:The megabyte limit for FIO_PREALLOCATE_MEMORY. This will prevent the driver from potentially using all of the system's non-paged memory. (int)
parm:           exclude_devices:List of cards to exclude from driver initialization (comma separated list of <domain>:<bus>:<slot>.<func>) (string)
parm:           parallel_attach:For use only under the direction of Customer Support. (int)
parm:           enable_ecc:N/A (int)
parm:           rmap_memory_limit_MiB:Memory limit in MiBytes for rmap rescan. (int)
parm:           expected_io_size:Timeout for data log compaction while shutting down. (int)
parm:           tintr_hw_wait:N/A (int)
parm:           iodrive_load_midprom:Load the midprom (int)
parm:           auto_attach_cache:Controls directCache behavior after an unclean shutdown: 0 = disable (cache is discarded and manual rebinding is necessary), 1 = enable (default). (int)
parm:           use_modules:Number of NAND modules to use (int)
parm:           make_assert_nonfatal:For use only under the direction of Customer Support. (int)
parm:           enable_discard:For use only under the direction of Customer Support. (int)
parm:           max_md_blocks_per_device:For use only under the direction of Customer Support. (int)
parm:           force_soft_ecc:Forces software ECC in all cases (int)
parm:           max_requests:How many requests pending in iodrive (int)
parm:           global_slot_power_limit_mw:Global PCIe slot power limit in milliwatts. Performance will be throttled to not exceed this limit in any PCIe slot. (int)
parm:           read_pipe_depth:Max number of read requests outstanding in hardware. (int)
parm:           force_minimal_mode:N/A (int)
parm:           auto_attach:Automatically attach drive during driver initialization: 0 = disable attach, 1 = enable attach (default). Note for Windows only: The driver will only attach if there was a clean shutdown, otherwise the fiochkdrv utility will perform the full scan attach, 2 = Windows only: Forces the driver to do a full rescan (if needed). (int)
parm:           external_power_override:Override external power requirement on boards that normally require it. (comma-separated list of adapter serial numbers) (string)
parm:           compaction_timeout_ms:Timeout in ms for data log compaction while shutting down. (int)
parm:           disable_groomer:For use only under the direction of Customer Support. (int)
parm:           include_devices:Whitelist of cards to include in driver initialization (comma separated list of <domain>:<bus>:<slot>.<func>) (string)
parm:           preallocate_memory:Causes the driver to pre-allocate the RAM it needs (string)
parm:           fio_dev_optimal_blk_size:Optimal block size hint for the linux block layer. (int)
parm:           disable_msi:N/A (int)
parm:           scsi_queue_depth:The queue depth that is advertised to the OS SCSI interface. (int)
parm:           numa_node_forced_local:Only schedule fio-wq completion threads for use on NUMA node local to fct-worker (int)
parm:           numa_node_override:Override device to NUMA node binding (array of charp)
parm:           use_workqueue:int

$ lsmod |grep iomemory
iomemory_vsl 1228723 0
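The numbered steps above can be sketched as one sequence. The tarball and fio package filenames here are assumptions (only the directory and .deb names were posted); with DRY_RUN=1 (the default) the commands are only printed for review:

```shell
# Sketch of the build/install sequence; adjust filenames and kernel
# version for your system. DRY_RUN=1 prints commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run tar xf iomemory-vsl_3.2.10.1509.tar.gz      # hypothetical tarball name
run cd iomemory-vsl-3.2.10.1509
run dpkg-buildpackage
run cd ..
run dpkg -i iomemory-vsl-2.6.32-37-pve_3.2.10.1509-1.0_amd64.deb \
           iomemory-vsl-config-2.6.32-37-pve_3.2.10.1509-1.0_amd64.deb
run dpkg -i fio-*                               # fio-utils + fio-firmware-fusion debs
run modprobe iomemory-vsl
```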
 

pjkenned

New Member
Dec 16, 2013
@mir - thank you again for all of your help!

Here is what I did: first, re-installed Proxmox VE 3.4 fresh.

Then, similar to the link you provided:
Downloaded the 3.2.10 files
Edited the repos so I could install build tools
Ran apt-get install gcc fakeroot build-essential debhelper rsync pve-headers-2.6.32-37-pve -y
Did the tar and build
Installed the VSL driver

I did exactly what you had.

To install iomemory-vsl-config-2.6.32-37-pve_3.2.10.1509-1.0_amd64.deb I needed to apt-get install lsb-release, and also dpkg -i iomemory-vsl-source_3.2.10.1509-1.0_amd64.deb, in order to install on Proxmox.

When I got to:
root@intel-e5-2699v3:~# modprobe iomemory-vsl
ERROR: could not insert 'iomemory_vsl': Exec format error

I did a reboot and the same thing happened.
root@intel-e5-2699v3:~# modprobe iomemory-vsl
ERROR: could not insert 'iomemory_vsl': Exec format error

And of course:
root@intel-e5-2699v3:~# lsmod |grep iomemory
root@intel-e5-2699v3:~#



I have these cards working in Windows Server 2012 R2/ Hyper-V Server 2012 R2, Ubuntu 14.04, and ESXi 5.5 so I know the cards work. Something seems strange with the PVE install.
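For anyone hitting the same wall: "Exec format error" from modprobe usually means the .ko's vermagic doesn't match the running kernel. A hedged way to check, with the module path taken from mir's modinfo output (adjust if yours differs):

```shell
# Compare the module's vermagic string against the running kernel.
# A mismatch between the two is the usual cause of "Exec format error".
mod="/lib/modules/$(uname -r)/extra/fio/iomemory-vsl.ko"
uname -r
if [ -f "$mod" ]; then
    modinfo -F vermagic "$mod"
else
    echo "no module at $mod -- was the package built for this kernel?"
fi
# The kernel log usually names the exact complaint:
dmesg 2>/dev/null | grep -iE 'iomemory|vermagic|disagrees' | tail -n 5
echo "check complete"
```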


Such a bummer!
 

pjkenned

New Member
Dec 16, 2013
Ha! Thank you mir! Last thing I commit to Proxmox for the next gen in the datacenter. Hopefully the STH forums will fly!
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
It is not possible to mail you since your mail provider refuses to receive the mail due to its size.

May 2 22:25:22 iredmail postfix/smtp[18216]: 4F384120091: to=<address removed>, relay=aspmx.l.google.com[74.125.136.27]:25, delay=1.8, delays=0.62/0.03/1.1/0, dsn=5.3.4, status=bounced (message size 39991968 exceeds size limit 35882577 of server aspmx.l.google.com[74.125.136.27])
 

pjkenned

New Member
Dec 16, 2013
Google handles my mail. Can you post it to Google Drive or something similar? If you send me a note, I can find somewhere to upload it.
 

mir

Famous Member
Apr 14, 2012
Copenhagen, Denmark
You really should find yourself a decent mail provider!

They refuse to receive mails with attached zip files, but renaming the file from file.zip to file.txt avoids the bounce.

So you should have a mail in your inbox now ;-)
 
