Shared Remote ZFS Storage

what would really make everyone (who isn't a "dyed-in-the-wool" Debian admin) happy is a PSS solution from Proxmox
I don't think this is the arguable point. I'd love to have Proxmox GmbH provide every service I use in my enterprise, but that doesn't make it a value proposition for THEM. Perhaps you can convince them this is a pool they want to swim in; a good place to start is a feature request in their Bugzilla, but if it were me, you'd better have a good argument about what's in it for me. A storage solution is a big undertaking.
 
I'm betting 90% of the code is PBS and Proxmox GmbH knows intimately which iSCSI targets they expect...
 
There is a market for it... but can the SMB market afford Blockbridge? I'm not asking for a full-featured dual-path backend like NetApp, Dell/EMC, or even Blockbridge, just a simple PVE-compatible "ZFS over iSCSI" you can stand up and enjoy like the rest of the Proxmox solutions...
The problem is the HA part. It does not make sense to run an HA cluster with storage that is not HA itself. You're building in a SPOF and (sorry again) no sane person would want to run an HA cluster this way. This is a recipe for disaster. The hardware cost for the storage box is better spent on NVMe to build a Ceph cluster.

I'm betting 90% of the code is PBS and Proxmox GmbH knows intimately which iSCSI targets they expect...
PBS is written in Rust and has nothing to do with iSCSI, ZFS replication, or the code base of PVE, which is written in Perl. I don't understand why you're referring to it again and again.

The "server part" of the ZFS-over-iSCSI implementation is installing lio, targetcli and ssh. Once setup, there is nothing to configure. I don't need "a product" for this. The software behind this is already taken care of by the Debian maintainers.
 
The hardware cost for the storage box is better spent on NVMe to build a Ceph cluster
In the enterprise this is not a good option. It is difficult enough staffing your IT with competent, dependable people to begin with; the more you can farm out to function suppliers (e.g., storage) the better. Ceph makes sense when you have sufficient IT competence, including Ceph expertise. There is a reason people fork over 6-7 figures for Nimbles, EMC AFAs, etc.

I'm betting 90% of the code is PBS and Proxmox GmbH knows intimately which iSCSI targets they expect...
The only response I have for that is: stay away from the casino ;)
 
From what I understand, you just want Ceph. Blockbridge = Ceph, VMware vSAN ~= Ceph. There is native support for iSCSI in Proxmox and native support for ZFS in Proxmox; if you need to share it out, set up a container with Samba or NFS. A whole list of LinuxServer daemons is available natively in Proxmox.
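
If the goal is just native ZFS plus something shared for files, that is only a couple of entries in /etc/pve/storage.cfg, roughly like this (pool, server and export names are only examples):

    zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

    nfs: shared-nfs
        server 192.168.1.60
        export /tank/shared
        content images,iso,backup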

The whole point of Ceph in Proxmox is that you don't need specialized IT staff to set it up. It's dead simple; I am an advanced enterprise user and have never seen the need to tweak much of Ceph beyond enabling Prometheus monitoring. There are some things you can do to improve performance, but only if you truly need to squeeze out the last bit. If the choice is between adding another $15k server or hiring a person, it is quickly made. If you can get 10% more performance out of several racks of hardware, it may be worth investing in an engineer.
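
For reference, the whole hyperconverged setup is only a handful of pveceph commands per node. A rough sketch (network and device names are placeholders, and the exact subcommand syntax can vary a bit between PVE versions, so check the current docs):

    pveceph install                              # on every node
    pveceph init --network 10.10.10.0/24         # once, on the first node
    pveceph mon create                           # on each monitor node
    pveceph mgr create
    pveceph osd create /dev/nvme0n1              # repeat per data disk
    pveceph pool create vmpool --add_storages    # RBD pool plus the PVE storage entry
    ceph mgr module enable prometheus            # the monitoring bit mentioned above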

If you don't want to deal with the Ceph stuff, Canonical and others offer professional services, and I'm fairly sure the company behind Proxmox or one of its resellers can offer those as well, if it's worth spending ~$2k to get a small performance improvement. But native Ceph, with a simple 25 or 100G network, is really plug and play, and the defaults work surprisingly well until you get to 100s of nodes with 1000s of disks.
 
@guruevi, at Blockbridge, we do not utilize Ceph (or ZFS) in any capacity. Instead, we have developed a purpose-built storage stack that includes comprehensive implementations of SCSI, iSCSI, NVMe, and NVMe/TCP. Our design objectives differ significantly from Ceph's, as we focus on delivering low-latency/high-IOPS storage solutions tailored to MSPs, CSPs, and enterprises needing non-disruptive storage management & 24/7/365 critical support.

Some people overlook the significant investment required to build and support software. Developers, QA teams, release engineers, documentation, and testing infrastructure all come with costs. Moreover, when you're developing a product that is constantly evolving while also needing to maintain compatibility with another ever-changing product, the effort required for testing and support grows exponentially—it's an n^2 challenge that only becomes more complex over time.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
