ZFS shared storage over FC

Spiros Pap

Hi all,

Since Proxmox supports shared ZFS over iSCSI, I believe it should also be possible to have a shared ZFS LUN over FC.
As far as I can see, the GUI only supports ZFS over iSCSI.

Is there a way to manually configure a shared ZFS LUN over FC?

Thanx,
Spiros
 
Hi,

If you use FC, you can only import/create the ZFS pool locally.
If you want to use multipath, you have to install and configure the multipath-tools package.

see
https://pve.proxmox.com/wiki/ZFS_on_Linux
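
For reference, a minimal multipath setup might look roughly like this (a sketch only; the WWID and alias below are placeholders, take the real WWID of your LUN from 'multipath -ll'):
Code:
# install the tools
apt install multipath-tools

# /etc/multipath.conf -- minimal example, values are placeholders
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        wwid   3600a0980000000000000000000000000
        alias  fc-zfs-lun
    }
}

# then reload multipathd and check the paths
systemctl restart multipathd
multipath -ll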
 
I have installed multipathd and using it. I have one dm-xx device that is the multipathed version of the 4 paths I have.
I can see this dm-xx on all the nodes.

While I can create an LVM volume group on this dm-xx, I can't create a shared ZFS storage. I'm just curious what is preventing that, when ZFS over iSCSI can be shared.

Thanx,
Sp
 
Ok, now I understand what you are trying to achieve.
We have no storage plugin that can do this.

ZFS over iSCSI is not a generic plugin; it only works with a limited set of SAN boxes (like FreeNAS, Nexenta).
We pass the iSCSI connection to QEMU, which can use iSCSI directly.

With FC you have locally mapped devices, and QEMU will assume these are local resources when you use them.
So you are not able to migrate your VM online.

You could use these devices as raw devices in the VM.
For this, you have to edit your VM config manually.
Code:
scsi<n>: <path to dev>
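
For example (the device name below is just an illustration; use whatever /dev/mapper name multipathd assigned to your LUN), the entry in /etc/pve/qemu-server/<vmid>.conf could look like:
Code:
scsi1: /dev/mapper/mpatha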

What you can do is write your own custom storage plugin.
See
https://pve.proxmox.com/pipermail/pve-devel/2016-August/022358.html
https://github.com/odiso/proxmox-pve-storage-netapp
 
Well... if you create a (local) ZFS storage with the same name on both cluster nodes, Proxmox will transfer the ZFS volumes from one node to the other and you still get live migration. This works with local storage (or with shared storage that presents a separate LUN to each cluster node). Unfortunately, it does not solve the problem the OP had, a real multipathed FC LUN as shared storage, which I'm also facing now :(. Any solutions or updates on the matter?
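
As a sketch of what that looks like (pool and storage names are just examples): the zfspool entry in /etc/pve/storage.cfg is the same for the whole cluster, so each node only needs a local pool of that name.
Code:
zfspool: vmdata
        pool tank/vmdata
        content images,rootdir
        sparse 1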
 
Yes, I just tried this week via storagecli to set up ZFS-over-FC, but I was not able to see any LUNs on the client side. In general it should work, but I do not know why it does not. Unfortunately, there is not much information available on storagecli via FC, so I do not know how to proceed here.
 
So, I got the multipathed configuration to work with targetcli, QLogic HBAs and exporting a ZFS zvol from one node to the fabric.
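
In case it helps others, a rough sketch of the targetcli side (backstore name, zvol path and WWPNs are placeholders; the QLogic ports have to be switched to target mode, e.g. via the qla2xxx qlini_mode module option):
Code:
# block backstore on top of the ZFS zvol
targetcli /backstores/block create name=vm-lun0 dev=/dev/zvol/tank/vm-lun0

# qla2xxx target on the local HBA port (placeholder WWPN)
targetcli /qla2xxx create naa.21000024ff000001

# export the backstore as a LUN and allow the initiator WWPNs of the PVE nodes
targetcli /qla2xxx/naa.21000024ff000001/luns create /backstores/block/vm-lun0
targetcli /qla2xxx/naa.21000024ff000001/acls create naa.21000024ff000002

targetcli saveconfig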
 
Hi @LnxBil. Can you please elaborate on how you accomplished FC + multipath + ZFS, or provide reference links that helped you out? I believe that with this configuration snapshots are working, right?
Thank you very much.
Gus
 
