Does Proxmox plan to support SCSI over FC?

dortiz777

I have several clients who have VMware connected to SAN storage using FC SCSI, due to the technology's performance, but I can't migrate them to Proxmox because the FC functionality isn't all there yet. Is there any roadmap to support this technology so we can finally phase out VMware?
 
Hi @dortiz777, welcome to the forum.

As PVE uses an Ubuntu-derived kernel and a Debian userland for the underlying OS functionality, FC-connected storage is well supported.
The primary option is to layer LVM on top of your FC LUN.
You can find some guidance in this article: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
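In rough terms, the setup looks something like this (a minimal sketch; the multipath device and storage names below are just placeholder examples, see the article for the full details):

Code:
# Identify the FC LUN behind multipath (device name below is an example)
multipath -ll

# On ONE node only: create the PV and a volume group on the shared LUN
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# Register the VG cluster-wide as shared LVM storage
pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images,rootdir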

While there are some features missing, the combination of SAN and PVE is successfully used by many.
Perhaps you could be more specific about what you need?



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Exactly the disadvantages mentioned in the link you sent. The snapshot issue is very important, because that's the technology used to create backups. As I understand it, that approach uses LVM to create the shared storage and does so via iSCSI, adding an additional layer. Besides adding that middle layer, it doesn't take advantage of the performance FC offers, which is currently available at 16Gb and 32Gb; instead you depend on whatever LAN connectivity you have, unless you purchase extra cards.
 
Exactly the disadvantages mentioned in the link you sent. The snapshot issue is very important, because that's the technology used to create backups. As I understand it, that approach uses LVM to create the shared storage and does so via iSCSI, adding an additional layer. Besides adding that middle layer, it doesn't take advantage of the performance FC offers, which is currently available at 16Gb and 32Gb; instead you depend on whatever LAN connectivity you have, unless you purchase extra cards.
Hi,

Proxmox doesn't use storage snapshots for backups.

I'm currently working on adding snapshots for shared LVM (no official target date, I'm hoping for PVE 9). It will work with LVM over SCSI/FC.
 
With any SCSI-type solution (anything-over-fabric), block-level snapshots, thin provisioning and overcommitment can only work if the storage is aware of which blocks are being used/changed. That means things like TRIM have to go from the guest all the way down to the storage; I've rarely seen a setup where that is actively used and properly configured.
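
For what it's worth, a minimal sketch of what that looks like on the PVE side (the VMID and volume name below are just examples):

Code:
# On the PVE host: enable discard passthrough (and SSD emulation) on the virtual disk
qm set 100 --scsi0 san-lvm:vm-100-disk-0,discard=on,ssd=1

# Inside the Linux guest: check that discard reaches the device, then trim
lsblk --discard     # non-zero DISC-GRAN/DISC-MAX means discards are passed down
fstrim -av          # or enable fstrim.timer for periodic trims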

Proprietary storage/RAID typically doesn't support this without "extra payment" or "custom licenses" on the most expensive tiers; there is simply too much overhead for the simple Pentium 3-class ARM chips with 512MB or 1GB of RAM you typically see in these controllers. The more expensive systems (e.g. Dell Power*, HPE Cray) are basically servers running OpenBSD.

As people mentioned above, the Linux kernel in Proxmox may support this at other layers, or your guest may support it (e.g. LVM or ZFS), but yes, you're adding another layer. Note that whenever you see a how-to, you can replace iSCSI with FC or SAS or Ultra320 or even NVMe-oF; it's all SCSI. You may be able to write a plug-in for your specific storage solution, but once you get it working, most people just go with that and ignore the GUI stuff.
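
A quick way to see that on a host (assuming the lsscsi and multipath-tools packages are installed; purely illustrative):

Code:
lsscsi -t       # the transport column shows fc:, sas:, iscsi:, ... per LUN
multipath -ll   # the same multipath stack sits on top, whatever the transport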

Hopefully that makes sense; it makes sense in my head. I've only recently seen vendors support virtualized storage with BlueField DPUs, and as you can imagine, that's not cheap. I have some inherited storage like that for test/dev; we simply pass LUNs to each individual host, pretend they are local disks and run Ceph across them, ignoring the proprietary RAID capabilities since those have bitten me in the past.
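
Very roughly, that pattern looks like this (assuming zoning presents each node's dedicated LUN as a plain block device, /dev/sdb here; names are examples and the wipe is destructive):

Code:
# On each PVE node, against that node's own LUN
wipefs -a /dev/sdb            # clear any leftover signatures (destroys data!)
pveceph osd create /dev/sdb   # treat the "local" LUN as an OSD disk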
 
OP's question was covered, but I am curious.
Proprietary storage/RAID typically doesn't support this without "extra payment" or "custom licenses" on the most expensive tiers; there is simply too much overhead for the simple Pentium 3-class ARM chips with 512MB or 1GB of RAM you typically see in these controllers. The more expensive systems (e.g. Dell Power*, HPE Cray) are basically servers running OpenBSD.
umm... what would you consider a product that fits the above description?

FC SCSI, due to the technology's performance, but I can't migrate them to Proxmox
Performance, you say? Unless they bought an FC SAN in the very recent past, it's probably not very fast at all (anything 16Gb or less is pretty much ancient). And if they DID buy it recently... why?! If it's an older FC SAN and it's already depreciated out, the discussion should be about replacing it along with the hypervisors as a complete solution, not piecemeal.
 
We have been running such a setup, with OCFS2 on the shared LUNs and qcow2 images in that filesystem, for several years now.

It works but is unsupported by Proxmox.
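
Roughly, the shape of it (device names, labels and mount points below are just examples, and again: this is a sketch of an unsupported setup, not a recipe):

Code:
# On every node: install the tools, describe the cluster in /etc/ocfs2/cluster.conf,
# then bring up the cluster stack
apt install ocfs2-tools
systemctl enable --now o2cb ocfs2

# On ONE node only: create the filesystem on the shared FC LUN
mkfs.ocfs2 -L pve-ocfs2 -N 4 /dev/mapper/mpatha   # -N >= number of cluster nodes

# On every node: mount it (plus /etc/fstab) and add it as shared directory storage
mount -t ocfs2 /dev/mapper/mpatha /mnt/ocfs2
pvesm add dir san-ocfs2 --path /mnt/ocfs2 --shared 1 --is_mountpoint 1 --content images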

Can you tell us more about using OCFS2 on shared LUNs with qcow2 and how it works? Like how it is set up?
 
umm... what would you consider a product that fits the above description?
I believe Dell Unity has thin-provisioning drivers for OpenStack and VMware; they are basically Intel servers with SAS, FC and/or iSCSI connectivity, and they have snapshots etc. I believe the quote for that was ~$130k for ~40TB usable in flash (after discount), and then add on $2k/year in licensing cost for each 'feature' such as snapshots, tiering, deduplication, replication etc, so it came out to about $250k for 40TB usable over 5 years.

Back in the day I used InforTrend, which had it as an add-on option; that's where the whole "proprietary RAID" thing bit me: ah, your system is out of warranty, I guess you have to buy a new one to recover the data.
 
they are basically Intel servers with SAS, FC and/or iSCSI connectivity,
That describes effectively every storage product made in the past ~20 years, from a QNAP to a DDN Exascaler. What amused me was the comment "there is simply too much overhead for the simple Pentium 3-class ARM chips with 512MB or 1GB of RAM you typically see in these controllers" - I'm still wondering what you were referring to.

As for licensed features - why is that such a problem? Supporting ever-changing API requirements, security updates, bug fixes, etc. takes engineering time. Instead of charging for the whole stack, the devs made a conscious decision to offer it à la carte to make the solution available to a wider audience.
 
No, the low end stuff (eg PowerVault, MSA, etc) is pretty much a dumb LSI RAID controller with an embedded web server bolted on for configuration.

The problem with the expensive stuff is that the licensing often doesn't transfer with ownership, so for homelabbers you end up with what used to be an expensive piece of equipment that, beyond the basic stuff, is non-functional.

As I said, the experience I've had is that the data becomes unrecoverable as soon as you have had a few failures; regular disks do not even work, you need to buy the same Seagate disks with "their" sticker on them.

There is a point to be made for "I don't need a storage engineer", which is true for many small setups and is why cloud is so popular with them: the cost of cloud, even though you effectively pay for the physical hardware again every 6-9 months, is significantly lower than the add-ons when you can't afford an FTE. With the end of VMware that too is coming to an end, though; after about 20 years people are finally realizing vendor promises are largely empty and open source can also provide value.
 
(eg PowerVault, MSA, etc) is pretty much a dumb LSI RAID controller with an embedded web server bolted on for configuration.
Not true.

Both the PowerVault ME50xx and MSA 206x lines use the Dot Hill controller core, which is pretty darn sophisticated and powerful and can saturate multiple 25Gb links. Oh, and neither charges extra for snapshots or thin provisioning.

The problem with the expensive stuff is that the licensing often doesn't transfer with ownership,
sure they do.
so for homelabbers you end up with what used to be an expensive piece of equipment that, beyond the basic stuff, is non-functional.
"Homelabbers" are not the target audience. Why is this relevant?
As I said, the experience I’ve had is that the data becomes unrecoverable as soon as you have had a few failures,
If we're still speaking about the Dot Hill-based controllers (or the NetApp-based ones for the older products), I don't share your experience; all of those are rock solid. I'll grant you that these are anecdotal data points, but I will say that those commercial products were sold by the thousands, and had they been as, ahem, undependable as you suggest, they would have caused massive lawsuits.
There is a point to be made for "I don't need a storage engineer", which is true for many small setups and is why cloud is so popular with them: the cost of cloud, even though you effectively pay for the physical hardware again every 6-9 months, is significantly lower than the add-ons when you can't afford an FTE. With the end of VMware that too is coming to an end, though; after about 20 years people are finally realizing vendor promises are largely empty and open source can also provide value.
I don't really understand this point. Every product has its target audience. Any business whose livelihood depends on a data storage device needs assurance of its function, dependability, and integrity. If you can provide that without a storage engineer, more power to you. As far as I know, most business owners are not data systems specialists and depend on vendor products/services to provide it.
 
Not true.

Both the PowerVault ME50xx and MSA 206x lines use the Dot Hill controller core, which is pretty darn sophisticated and powerful and can saturate multiple 25Gb links. Oh, and neither charges extra for snapshots or thin provisioning.
Dot Hill at least a few years ago still used LSI SAS controllers. I think you mean "they upsell you the license that's included in the base price", and when you ask for leeway they start chopping off features. Not for nothing does the manual have a list of license key options with expiration dates: https://www.dell.com/support/manual...3b331d-7d29-4644-a9ec-b866c961058d&lang=en-us

Multiple 25G links is a bit slow when more open providers use simple 100G Ethernet with an NVMe enclosure from the same vendors (badged Dell PowerEdge) for about the same cost.

sure they do.

"Homelabbers" are not the target audience. why is this relevant?

Resale value is a thing. Without the software the device is useless. When your license expires the device is useless.
If we're still speaking about the dothill based controllers (or the netapp based ones for the older products) I don't share your experiences. all those are rock solid. I'll grant you that all these are anecdotal datapoints, but I will say that those commercial products were sold by the thousands and had they been as, ahem, undependable as you suggest they would have cause massive lawsuits.
I'm not saying they don't work as sold. But when you're just outside the warranty period and a drive fails, it's $600 for a 2TB drive; or with something more serious like a controller failure, it's "we no longer support that model, the data can only be recovered in that or a newer model".

I dont really understand this point. Every product has its target audience. Any business that has its livelihood dependent on a data storage device needs to have assurance of its function, dependability, and integrity. If you can do that without a storage engineer, more power to you. As far as I know, most business owners are not data systems specialists and depend on vendor products/services to provide.
Agreed. Hence why using closed systems with locked-in licensing is a really bad idea. I'm in the gov/edu space, and it's not the first time I've had to rescue a system years past its due date; hardware-enclosed RAID is a nightmare to work with. The "bad stuff" doesn't typically happen when the device is new, it's when the device is outside its warranty period: unless you're buying more shit from them, they will bone you. And if you buy more shit, the prices keep going higher despite per-TB costs in the rest of the industry going down. I am currently in the process of abandoning our server vendor completely because, once they thought they had locked us into their "Open" management tool (we instead use Ansible with Redfish), they started reducing the discounts. 5% makes a huge difference when you're buying hundreds of servers per year and literally thousands of desktops, and we saw our discounts drop from 85% off MSRP to 65% in just 3 years.
 
Dot Hill at least a few years ago still used LSI SAS controllers.
I think you're conflating "controller" with "bus adapter." LSI is the last remaining SAS bus chip vendor. The controllers themselves are, as you pointed out, an "Intel server with SAS." Everyone in the industry uses them because Broadcom bought all the other vendors.

Multiple 25G links is a bit slow when more open providers use simple 100G Ethernet with an NVMe enclosure from the same vendors (badged Dell PowerEdge) for about the same cost.
I don't understand what you're trying to say. Are you saying any storage product is not useful if it's not "simple 100G Ethernet with an NVMe enclosure"? Or that you can saturate multiple 25Gb links with "a dumb LSI RAID controller with an embedded web server bolted on"?

Look, I get what your particular take on storage is. You have the skillset, time, and willingness to accept the responsibility of providing service with NVMe-oF boxes or old decommissioned hardware. Good on you - but don't mistake that for a valid (or even good) business model that some OTHER company should follow. If you really felt it was, maybe YOU should start a storage company...

I'll stop hijacking the thread now ;)
 
No, they use LSI SAS RAID controllers (again, I haven't cracked one open recently, but very few vendors are going to develop their own ASIC); it's just that the configuration is mirrored over a serial or I2C bus between the two 'mainboards' if you have redundancy, or, in some cases, simply stored in the first few sectors of every hard drive.

25G iSCSI or 16G FC in front of any NVMe/SSD enclosure through a RAID controller is bottlenecking you. My point for the business end is: for the same price or cheaper, you can buy an open-source product TODAY that won't bottleneck you and won't lock you in. 45Drives, Ubuntu, Red Hat, Nexenta, Proxmox... all available with managed services and hardware. There are enough options out there that shared storage behind a proprietary RAID controller has simply been a dead product for the last decade.

The only place I still see them is in certified setups from the big guys that were selling them for VMware 20 years ago, but now VMware is dying too. I have a rep right now trying to sell me FC storage for a Red Hat OpenShift cluster - like, WTF dude, he literally told me this morning, "we're not permitted to sell them with Ceph under x amount of nodes".
 