> So you basically run Blockbridge on any x86 server with NVMe/SSD drives and connect the server simply over iSCSI to Proxmox?
Blockbridge performs the install on pre-qualified systems because of the amount of tuning they do to get the absolute best performance. You can imagine the differences between servers in terms of processor architecture, number of chiplets, number of cores per chiplet, bus speed, dual versus single processor, as well as NIC support. They have tunings for Mellanox NICs, for example, but have done some tuning for Intel as well. It makes a huge difference in latency when everything is set correctly for the given workload.
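Just to give a feel for the kind of knobs involved - these are not Blockbridge's actual settings, only a generic illustration of NIC-level tuning that moves latency (interface name and values made up):

```bash
# NOT Blockbridge's settings - generic example of latency-oriented NIC tuning
# (interface "ens1f0" and the values are placeholders)

# Disable adaptive interrupt coalescing and deliver interrupts immediately
ethtool -C ens1f0 adaptive-rx off rx-usecs 0 rx-frames 1

# Spread work across more hardware queues to match the CPU layout
ethtool -L ens1f0 combined 16
```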
> Or do they use their "own" special protocol similar to iSCSI?
Thankfully, there are no proprietary protocols, custom kernel drivers, or specialized networks (e.g., FC, RDMA). They support iSCSI/TCP and NVMe/TCP. Having native kernel support simplifies maintenance and PVE upgrades. Both protocols are speedy; I run both. NVMe/TCP has lower latency.
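Because both data paths use the stock in-kernel initiators, attaching a target by hand is just the normal open-iscsi / nvme-cli workflow. A rough sketch (portal address, IQN, and NQN below are placeholders; as I understand it the Blockbridge PVE plugin normally handles this for you):

```bash
# iSCSI/TCP: discover and log in with open-iscsi (placeholder portal/IQN)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2024-01.com.example:placeholder-target -p 192.0.2.10 --login

# NVMe/TCP: discover and connect with nvme-cli (placeholder address/NQN)
nvme discover -t tcp -a 192.0.2.10 -s 4420
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2024-01.com.example:placeholder-subsys
```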
> Another question comes to my mind: everyone is talking about the great performance compared to any other solution, but in my findings I found iSCSI a lot slower than FCoE in terms of latency.
> But that was some time ago though, FCoE is probably dead by now xD
Their iSCSI implementation is quite fast, so I definitely wouldn't qualify it as "slow", but NVMe/TCP is best for latency. I've always been a big Fibre Channel fan, and we still have a bunch of FC switches and HBAs running in our environment. It just works and runs forever with super low latency, though I never did this with FCoE. In any case, you are right that Fibre Channel is essentially dead.
NVMe-oF is the latest low-latency technology, and it has so many benefits, from large queue counts to low-latency command execution, that it is hard to beat.
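A small sketch of the queue-count angle with nvme-cli (placeholder address/NQN; the queue counts are just an example - you would size them to your CPUs and NIC):

```bash
# Request several I/O queues and a deeper queue depth at connect time
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2024-01.com.example:placeholder-subsys \
    --nr-io-queues=8 --queue-size=128

# Confirm the subsystem, controller, and transport
nvme list-subsys
```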
> And a screenshot of the Proxmox Blockbridge part of the GUI would be really appreciated, though you can mask all the stuff that's not meant for the public
There are a lot of pieces to the GUI. Note that they also have a CLI that can perform all of the same functions. I would suggest contacting Blockbridge for a demo if you want to see the GUI; it would also give you an opportunity to meet the team, discuss your workloads and environment, and get first-hand answers from the company.
> First time I'm hearing of Blockbridge here; it's probably interesting for my company as well.
They have been around for a long time but have been in somewhat of a niche market, since everyone wants everything for free and chooses Ceph (which is dog slow). You get what you pay for, unfortunately.
> Just the license per TB of storage gives me a bit of a headache, because that only makes sense for really fast NVMe storage, not so much for big HDD storage pools.
Blockbridge is a flash-only system, so you can't use HDDs with it. We use Ceph for HDD storage, which is what it is good at (not SSDs). For object storage, Ceph isn't great in terms of efficiency (neither performance nor storage overhead), so we have been looking around for alternatives, even if we have to go with a commercial solution.
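As a rough sketch of the "Ceph for HDD" part - pinning a pool to the hdd device class so the SSDs stay out of it (the rule and pool names are made up, and the PG count is just an example):

```bash
# CRUSH rule that only picks OSDs with device class "hdd"
ceph osd crush rule create-replicated rule-hdd default host hdd

# Replicated pool on that rule, tagged for RBD use
ceph osd pool create hdd-pool 128 128 replicated rule-hdd
ceph osd pool application enable hdd-pool rbd
```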
> I'm just saying, because using 2 different storage types, let's say 1x NVMe Blockbridge and 1x iSCSI FreeNAS, doesn't look that beautiful in the eyes of an admin
True, and understandable, but the two storage types (SSD and HDD) are very different beasts. I wouldn't suggest iSCSI for hard drive storage - I would use Ceph with RBD, or, if you don't have a huge environment, TrueNAS (or FreeNAS). We have had issues with legacy equipment working with TrueNAS, so we had to use older FreeNAS virtual appliances. It just sits there and works fine, but you don't want to manage hundreds of TiB with it.
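For what it's worth, the two living side by side in /etc/pve/storage.cfg is only a few lines - roughly like this sketch (storage IDs, monitor addresses, and the IQN are placeholders):

```
rbd: ceph-hdd
        pool hdd-pool
        monhost 192.0.2.21 192.0.2.22 192.0.2.23
        content images,rootdir
        krbd 0

iscsi: freenas-legacy
        portal 192.0.2.30
        target iqn.2005-10.org.freenas.ctl:tank
        content images
```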