yarcod

New Member
Sep 30, 2020
Hi!

I'm quite new to Proxmox, though I have some prior Linux experience, and I haven't found any working way to access my existing ZFS pool from an OpenMediaVault VM. Ideally, I'd like OMV to access the disk space directly, without going through a network share; this is purely based on what I have read as best practice, nothing concrete.

I've tried sharing the ZFS pool over iSCSI to OMV, but this has failed. Even though I followed the guide https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI, I haven't been able to get any connection there at all. I think this would be my go-to solution: a separate bridge would limit access to the OMV VM only, and OMV would then handle all network sharing.

Another approach was to pass through the LSI 9211-8i HBA to OMV, since this is what I used in my previous setup (ESXi + FreeNAS). However, I never got this running either, because the IOMMU was not found. Following https://pve.proxmox.com/wiki/PCI(e)_Passthrough didn't help. I don't know why this is not working -- the hardware certainly supports it.

My questions are: what would be the recommended approach here? What could I be missing, and where do I continue troubleshooting? When the guides don't work, I'm mostly at a loss.

Thanks in advance!
 

Stefan_R

Proxmox Retired Staff
Jun 4, 2019
Not sure I understand your setup correctly, but I believe what you mean is that you have a ZFS pool on your PVE instance, and an OMV virtual machine running on top of that. And you want to access that zpool from within the VM.

If so, why not just use what you're given: Add a new Hard Disk to your VM, set the backing storage to your ZFS pool and make it as big as the zpool itself?
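For reference, this can be done from the command line as well. A minimal sketch, assuming the VM has ID 100 and the zpool is registered in PVE as a storage named Tank2 (both are assumptions here; substitute your own values):

```shell
# Add a new virtual disk on the scsi1 slot, backed by the "Tank2" storage.
# The number after the colon is the requested size in GiB; PVE then
# creates a zvol (e.g. Tank2/vm-100-disk-0) to back the disk.
qm set 100 --scsi1 Tank2:1500
```

The same operation is available in the GUI under the VM's Hardware tab via "Add > Hard Disk".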

If you want direct raw access to the disks, you can also pass them through directly (see https://johnkeen.tech/proxmox-physical-disk-to-vm-only-2-commands/ for example), or do the PCIe passthrough you suggested (for the IOMMU not found error: you need to enable the IOMMU manually in either /etc/default/grub or /etc/kernel/cmdline, depending on whether you're booting via UEFI with ZFS or not - this is described in the docs here).
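The raw disk passthrough from the linked article boils down to two commands. A sketch, again assuming VM ID 100; the disk ID below is a placeholder, not a real device:

```shell
# List the stable by-id names of the physical disks on the host:
ls /dev/disk/by-id/

# Attach one whole physical disk to the VM as an additional SCSI device
# (replace the by-id path with the disk you want to hand over):
qm set 100 --scsi2 /dev/disk/by-id/ata-EXAMPLE_MODEL_EXAMPLE_SERIAL
```

The VM then sees the disk as a plain block device and can create or import a ZFS pool on it itself, independently of the host.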
 

yarcod

New Member
Sep 30, 2020
Thanks for your reply!

Sorry if my explanation was a little fuzzy, but you got it all right! OMV in turn will handle network sharing etc.

If so, why not just use what you're given: Add a new Hard Disk to your VM, set the backing storage to your ZFS pool and make it as big as the zpool itself?
I cannot get that working, because I get the error:
zfs error: cannot create 'Tank2/vm-100-disk-0': out of space at /usr/share/perl5/PVE/API2/Qemu.pm line 1340. (500). According to this thread, this mostly happens when trying to set a size larger than the usable space in the zpool, and mostly with striped disks. The recommendation has been to use mirrored disks, which I am already doing -- still the same problem. Available space is 1.76 TB as seen in Proxmox, but giving that value, or slightly lower, throws the error.
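One likely explanation for the error above (a sketch, not a confirmed diagnosis for this pool): a non-sparse zvol carries a refreservation somewhat larger than its nominal size, so a zvol sized to the pool's full free space cannot fit. Two things worth checking, using the pool name Tank2 from the error message:

```shell
# Show how the pool's space is actually accounted for
# (avail, used by reservations, used by snapshots, etc.):
zfs list -o space Tank2

# As a test, create the volume thin-provisioned (-s skips the
# reservation); the size here is just an example value:
zfs create -s -V 1700G Tank2/vm-100-disk-0
```

Alternatively, ticking "Thin provision" on the ZFS storage in the PVE storage configuration should make PVE create its zvols sparse, which avoids the reservation overhead; otherwise, requesting a noticeably smaller disk size usually works.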

If you want direct raw access to the disks, you can also pass them through directly
Won't this prevent me from reading and handling ZFS in the VM? This is, however, something I haven't tested yet. I will try this method as well!

or do the PCIe passthrough you suggested (for the IOMMU not found error: you need to enable the IOMMU manually in either /etc/default/grub or /etc/kernel/cmdline, depending on if you're using ZFS on UEFI or not - this is described in the docs here).
Yes, I have followed the docs and still end up without the IOMMU enabled. When entering the BIOS setup, the only option is to boot in "Dual Mode" (BIOS and UEFI), so I cannot choose which one I use. As such, I have edited both /etc/default/grub and /etc/kernel/cmdline, to no avail. Is it an issue to have both edited? I suppose not?
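For context, having both files edited should be harmless: only the file read by the boot loader actually in use takes effect. A sketch of the full procedure on an Intel host (use amd_iommu=on on AMD):

```shell
# Legacy BIOS / GRUB boot: in /etc/default/grub, set
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# then apply the change:
update-grub

# UEFI with systemd-boot (ZFS root): append the same options to the
# single line in /etc/kernel/cmdline, then apply:
proxmox-boot-tool refresh

# After a reboot, check whether the kernel actually enabled the IOMMU:
dmesg | grep -e DMAR -e IOMMU
```

If the dmesg check shows nothing after a reboot, the remaining suspect is usually the firmware itself (VT-d / AMD-Vi disabled or hidden in the BIOS setup).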
 

yarcod

New Member
Sep 30, 2020
...or do the PCIe passthrough you suggested ...
I ended up re-trying the pass-through solution from scratch and for some unfathomable reason it worked this time. I don't know what I did wrong before, but everything looks to be working from a Proxmox perspective now. Thank you for your help! :)
 
