Shared storage recommendation?

stuartbh

Proxmoxers,

Let me first say that this is a lab environment and that resiliency is not so much my goal as the ability to migrate VMs (preferably live, though shutdown migration is okay too) betwixt two PVE servers in a cluster. I have two Drobo devices that I want to use and am wondering what is recommended or works well here.

I am thinking that I'd like to get Proxmox to connect to them (Proxmox being the initiator and the Drobos being targets) and then use ZFS as the filesystem. I am also considering setting up an OpenMediaVault server eventually, but I have not obtained a box for that purpose yet, and I'd prefer to use the Drobos directly I think.

If I do this, then I would also need to be sure I know where to put the iSCSI login commands in the Proxmox boot process so the targets become available before Proxmox tries to start the VMs and such.
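For reference: if the target is defined as an iSCSI storage in Proxmox itself, the login is handled by the storage layer at boot, so no hand-rolled login script is needed. A minimal sketch, where the portal address and IQN are made-up placeholders for a Drobo target:

Code:
# Discover the targets the Drobo exposes (portal address is an example)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Let Proxmox manage the session: registered iSCSI storages are logged in
# automatically at boot, before any VMs are started
pvesm add iscsi drobo1 --portal 192.168.1.50 --target iqn.2005-06.com.drobo:b800i.serial123

# Alternatively, with plain open-iscsi, mark the node for automatic login
iscsiadm -m node -T iqn.2005-06.com.drobo:b800i.serial123 -p 192.168.1.50 \
    --op update -n node.startup -v automatic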

Thanks to everyone in advance!

Stuart
 
I am thinking that I'd like to get Proxmox to connect to them (Proxmox being the initiator and the Drobos being targets) and then use ZFS as the filesystem.
That's not going to work in a cluster; ZFS is not built for that. Just use the Drobos as targets and provide LUNs for each VM, or just use LVM on top of the iSCSI LUN. Please read this.
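To illustrate the LVM route: the LUN itself is added as an iSCSI storage and a shared volume group is layered on top of it. The storage IDs, portal, IQN and device path below are only placeholders:

Code:
# Add the Drobo LUN as an iSCSI storage (not used directly for disk images)
pvesm add iscsi drobo-iscsi --portal 192.168.1.50 \
    --target iqn.2005-06.com.drobo:b800i.serial123 --content none

# Create a volume group on the LUN (device path is an example)
pvcreate /dev/disk/by-id/scsi-360000000000000000000000000000001
vgcreate drobo-vg /dev/disk/by-id/scsi-360000000000000000000000000000001

# Register the VG as shared LVM storage so every node in the cluster can use it
pvesm add lvm drobo-lvm --vgname drobo-vg --shared 1 --content images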
 
That's not going to work in a cluster; ZFS is not built for that. Just use the Drobos as targets and provide LUNs for each VM, or just use LVM on top of the iSCSI LUN. Please read this.

I have not thought about this in a while, although perhaps a different solution might work. If I were to use (for example) OpenMediaVault (or I suppose TrueNAS SCALE) to connect to the Drobos via iSCSI, put ZFS on them by making them into a vdev and pool, and then enable the NFS sharing option in ZFS, then Proxmox could connect via NFS and use that share to hold qcow2 files. What say you about such a scheme?
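For what it's worth, the Proxmox end of such a scheme is just an NFS storage entry; the server address and export path below are made-up examples:

Code:
# Point PVE at the NFS export published by the OMV/TrueNAS box
pvesm add nfs omv-nfs --server 192.168.1.60 --export /export/vmstore --content images,iso

# qcow2 disks can then be allocated on it, e.g. a 32G disk for VM 101
pvesm alloc omv-nfs 101 vm-101-disk-0.qcow2 32G --format qcow2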

Stuart
 
I have not thought about this in a while, although perhaps a different solution might work. If I were to use (for example) OpenMediaVault (or I suppose TrueNAS SCALE) to connect to the Drobos via iSCSI, put ZFS on them by making them into a vdev and pool, and then enable the NFS sharing option in ZFS, then Proxmox could connect via NFS and use that share to hold qcow2 files. What say you about such a scheme?
Sure, you can always build a bigger software stack. If you go that route with ZFS on iSCSI, you can then also just go with ZFS-over-iSCSI (over iSCSI) and get all ZFS features without adding another QCOW2 layer. You can also use QCOW2 on top of that. I use such a setup for a test environment: PVE inside another PVE cluster that sits on FC-shared storage. It'll work, but it won't be fast.
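As a rough sketch of the Proxmox side of ZFS-over-iSCSI (the storage ID, addresses, pool name, IQN and target provider below are placeholders, and the target host must run one of the supported iSCSI implementations):

Code:
# Define a ZFS-over-iSCSI storage; PVE will SSH into the target host as root
# and run zfs/zpool there to create one zvol per VM disk
pvesm add zfs zfs-iscsi --portal 192.168.1.60 --pool tank \
    --target iqn.2003-01.org.linux-iscsi.storagevm:tank \
    --iscsiprovider LIO --lio_tpg tpg1 --content images --sparse 1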
 
Sure, you can always build a bigger software stack. If you go that route with ZFS on iSCSI, you can then also just go with ZFS-over-iSCSI (over iSCSI) and get all ZFS features without adding another QCOW2 layer. You can also use QCOW2 on top of that. I use such a setup for a test environment: PVE inside another PVE cluster that sits on FC-shared storage. It'll work, but it won't be fast.
ZFS over iSCSI (as implemented in Proxmox) requires the ability to SSH into the array, correct? I am not sure (I will have to check) whether the Drobo has this capability. If I can eliminate the need for the QCOW2 layer, all the better. Speed is not a huge deal to me, as this is a test/lab environment for me as well. The one thing is that QCOW2 does allow for greater portability, and it offers snapshotting too.

It seems the Drobo might be able to be configured for SSH access. If so, is there any kind of test script that I can run to verify it supports all the commands that Proxmox would need for ZFS over iSCSI?


Stuart
 
It seems the Drobo might be able to be configured for SSH access. If so, is there any kind of test script that I can run to verify it supports all the commands that Proxmox would need for ZFS over iSCSI?
There are no test scripts that can tell you whether the solution is compatible. ZFS over iSCSI is described here: https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI

As you can see, only specific iSCSI target implementations are supported out of the box. In addition, as you already noted, you must be able to log in via SSH as root. The system must also provide full ZFS configuration access via the standard ZFS tool set.
I suspect the universe of people using Drobo with Proxmox with ZFS/iSCSI is very small, if it exists at all. So you will have to try and report back.
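A quick manual probe of those two requirements already tells you a lot; the address below is an example, and the key path is where the ZFS-over-iSCSI plugin expects its SSH key:

Code:
# Can PVE reach the box over SSH as root with key authentication?
ssh -i /etc/pve/priv/zfs/192.168.1.50_id_rsa root@192.168.1.50 'echo ok'

# Does the box expose the standard ZFS tool set the plugin drives remotely?
ssh root@192.168.1.50 'zpool list && zfs list'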

If things don't work, you can always fork the ZFS/iSCSI plugin and try to modify it to work with the Drobo.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
There are no test scripts that can tell you whether the solution is compatible. ZFS over iSCSI is described here: https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI

As you can see, only specific iSCSI target implementations are supported out of the box. In addition, as you already noted, you must be able to log in via SSH as root. The system must also provide full ZFS configuration access via the standard ZFS tool set.
I suspect the universe of people using Drobo with Proxmox with ZFS/iSCSI is very small, if it exists at all. So you will have to try and report back.

If things don't work, you can always fork the ZFS/iSCSI plugin and try to modify it to work with the Drobo.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
bbgeek17,

I'd be fine with looking at modifying the plugin; where do I get the source for it, and are there instructions for compiling it (if required)?

If I am understanding you correctly, then the Drobo needs only to speak iSCSI and not itself support ZFS, since once Proxmox has iSCSI access to the Drobo it uses its own ZFS implementation to create what it needs in terms of shared storage. Correct?


Stuart
 
I'd be fine with looking at modifying the plugin; where do I get the source for it, and are there instructions for compiling it (if required)?
You can find the plugins here: https://github.com/proxmox/pve-storage/tree/master/PVE/Storage
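Note that these are Perl modules, so there is nothing to compile; a modified copy can simply be dropped next to the stock ones on a PVE node. The paths below are the standard locations, with the Custom directory being the intended spot for third-party plugins:

Code:
# Stock storage plugins on an installed node (ZFSPlugin.pm is ZFS-over-iSCSI)
ls /usr/share/perl5/PVE/Storage/

# Custom/modified plugins go here and are picked up after restarting the services
mkdir -p /usr/share/perl5/PVE/Storage/Custom
systemctl restart pvedaemon pveproxy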

If I am understanding you correctly, then the Drobo needs only to speak iSCSI
No, you misunderstand the order of operations. The storage system _MUST_ use ZFS internally to provision slices; it then must use a supported iSCSI implementation to expose those raw slices as iSCSI LUNs. What you do with those raw block iSCSI LUNs once they are connected to Proxmox is completely separate and up to you.
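To make that order of operations concrete: for every new VM disk, the ZFS-over-iSCSI plugin roughly does the following on the storage host over SSH (pool and zvol names are illustrative) before the LUN ever appears on the Proxmox side:

Code:
# Run remotely on the storage box by the plugin, not on the PVE node:
zfs create -V 32G tank/vm-101-disk-0    # carve a zvol out of the ZFS pool
# ...the zvol is then exported as an iSCSI LUN by the supported target
# implementation (LIO, istgt, IET or comstar) running on that same host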


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
bbgeek17,


Ah, I see said the blind man as he picked up the hammer and saw!

The Drobos themselves (at least the models I have, b800i) do not support ZFS. They do allow iSCSI connectivity, however. Thus, I am impelled to think that if TrueNAS can connect to them as iSCSI targets, use that space to create pools, and present those pools via NFS or SMB, that is likely the best way for me to make use of them with Proxmox.

Stuart
 
I thought you would do a storage VM:
Use iSCSI from the Drobo in PVE (as LVM), create only ONE VM on that storage, and create the ZFS pool inside that VM; afterwards export the ZFS pool via ZFS-over-iSCSI to your PVE host(s) so that any other VMs have full ZFS support. With such a setup you would have a storage VM that could be live migrated and would still give you ZFS. You could also just export NFS instead of ZFS-over-iSCSI and use it for QCOW2 files.
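Inside such a storage VM the setup boils down to a few steps; the device name, pool name and dataset below are only examples:

Code:
# In the storage VM: build a pool on the virtual disk that sits on the Drobo LUN
zpool create tank /dev/sdb

# Variant A: share a filesystem over NFS and point PVE at it for qcow2 files
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# Variant B: run a supported iSCSI target (e.g. LIO) in the VM and define a
# ZFS-over-iSCSI storage in PVE against it, so zvols are created per VM disk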
 
All,


Well, my original thinking was that I wanted to use ZFS directly (with RAIDZ) on the Drobos, as the drives are a bit older, and that would also have given me the ability to have snapshots. However, the Drobos (it's been a while since I played with them) do not really allow direct access to the drives via iSCSI, so I set up a volume on one Drobo that was left unformatted. I was then able to set up Proxmox to connect to the Drobo via iSCSI and configure it for LVM through Proxmox. Thereafter, I set up an OpenMediaVault storage VM for the time being. I think in the end I will fully assign the Drobo to Proxmox Backup Server and will eventually set up a TrueNAS server to host my VM and other file-sharing needs. Redundancy of file servers is not a huge deal to me; I am more concerned about the resiliency of my older drives and using ZFS to assure their persistence.

When I installed two of my four cluster members, I adjusted the default space settings, and this left those two nodes without a "local-lvm" storage. Initially this seemed inconsequential; however, it seems that the absence of a "local-lvm" on a cluster member can in some cases break the ability to migrate VMs around. As such, I am going to reinstall both of those cluster members in the near future.

For some reason, after setting up PVE to connect to the Drobo via iSCSI, one of the cluster members showed a question mark in front of the connection until I finally rebooted it; then it seemed happy.

The storage VM I created (for now) is not using ZFS but just does simple NFS file sharing holding QCOW2 files. Those can of course easily be migrated to other CIFS or NFS storage in the future, and they also seem the easiest mechanism for Proxmox to deal with for live migration of VMs.
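With the disks on shared NFS, a live migration is then a one-liner (the VM ID and target node name are just examples):

Code:
qm migrate 101 pve2 --online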

Stuart
 
