ZFS Central Storage

Neat idea? Sure.

The design as suggested? Relatively simple.

Making it work in real life, without connectivity loss on failover, without data loss, with the engineering capability for the MANY edge cases, and with the ability to actually support a production environment? Practically impossible. Doing this as a DIY project will cause you pain.
 
Maybe I'm missing a benefit here, but this seems like it would be attempting to use ZFS in a way that is already implemented (better) in Ceph.

Alternatively, if you're just looking for redundant ZFS, why not simply set up a zfs send/receive job in cron instead of multi-homing the disk shelves?
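A minimal sketch of what such a cron job could look like. The dataset name "tank/vmdata", target host "backup01", and state-file path are all hypothetical; adjust to your environment. DRYRUN defaults to 1, so the script only prints the commands it would run.

```shell
#!/bin/sh
# Nightly zfs send/receive replication sketch (hypothetical names).
# With DRYRUN=1 (the default) it only prints the commands.
set -eu

DS="tank/vmdata"
DEST="backup01"
STATE="${STATE:-/var/lib/zfs-repl/last-snap}"
NEW="${DS}@repl-$(date +%Y%m%d%H%M)"

run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "+ $*"; else sh -c "$*"; fi; }

# Read the last replicated snapshot name, if any.
if [ -s "$STATE" ]; then LAST=$(cat "$STATE"); else LAST=""; fi

# Snapshot first, then send an increment since the last run
# (or a full stream on the very first run).
run "zfs snapshot $NEW"
if [ -n "$LAST" ]; then
    run "zfs send -i $LAST $NEW | ssh $DEST zfs receive -F $DS"
else
    run "zfs send $NEW | ssh $DEST zfs receive -F $DS"
fi
run "echo $NEW > $STATE"
```

Wired into cron it might look like `0 2 * * * root DRYRUN=0 /usr/local/sbin/zfs-repl.sh` (path again hypothetical). Note this gives you an asynchronous replica, not HA: anything written after the last snapshot is lost on failover.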

IMHO, attempting to implement SAN architecture is a bit outdated nowadays.
 

Ceph requires 4-5 nodes to be a real HA setup. We have a full Ceph cluster set up in-house; I wouldn't even consider it without at least 4 nodes.

Money is a factor as well. zfs send/receive requires front ends with the same number of disks. With central storage we only have to buy one set of disks.
 

It would be SAS connectivity, so it should be a simple zpool import on failover. I do get what you're saying, to a degree.
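For context, the takeover sequence on the standby head of SAS-multihomed shelves would look roughly like the following (pool name "tank" is hypothetical). The commands are collected into a variable and printed rather than executed here, because importing a pool while the other head still has it imported will corrupt it; fence or power off the failed head first.

```shell
#!/bin/sh
# Sketch of a manual failover on the standby head (hypothetical pool name).
# -f forces import of a pool that was not cleanly exported by the old head.
POOL="tank"
TAKEOVER="zpool import -f $POOL
zfs mount -a
zpool status $POOL"
# Print the sequence instead of running it; remove this indirection once
# you have fencing in place.
printf '%s\n' "$TAKEOVER"
```

The hard part isn't these three commands; it's deciding *when* it is safe to run them, which is exactly the split-brain/fencing problem the earlier replies are warning about.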
 
I'd also love such a setup, but compared to an HA SAN it lacks a lot, even without another datacenter node.

There are also "real" products you can buy that claim to have worked it out:

https://zstor.de/en/zstor-ha-cluster-designs-e.html

Yeah, I have looked at those in the past, but when you get down to the nitty-gritty they came in with a price tag pretty much the same as our Nimble iSCSI arrays. Only so many of our customers can afford such a setup. We are trying to come up with a more cost-effective solution.
 

Most of those in the project's examples had 4+ shelves full of disks.

If you can afford to drop thousands on disks, you can afford a couple hundred for a Ceph node.

I love ZFS. I use it wherever it makes sense. This just seems like a use case that wouldn't make sense for Proxmox, and it doesn't seem like it would be that great of a solution anywhere else, really... it's like using a hammer to pound screws.
 
