Feature Request - Cinder

BobWads

New Member
Feb 2, 2024
Hello,

We are looking to switch from VMware to Proxmox and have the usual VMware setup.

3x Dell R760s and 1x PowerStore 500T using iSCSI (and a DR site with the same, plus SAN replication, DRS, etc.).

Since Proxmox does not have support for a cluster file system, we will lose snapshots and thin provisioning unless we fall back to NFS/SMB, which would drastically reduce our storage performance.

I understand that the onus for a Proxmox storage plugin is on Dell in our case; however, they do have a Cinder plugin.

Looking around, a lot of SANs have a Cinder plugin, and since Cinder is an open-source implementation, I suspect it wouldn't be an insurmountable task to incorporate it into Proxmox.

This would provide support for a wide variety of SANs and other storage solutions for Proxmox customers.
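To make the ask concrete, this is roughly the surface area such an integration would have to drive. A minimal sketch using python-cinderclient (Cinder's real client library); the credentials, auth URL, and names are placeholders:

```python
# Minimal sketch of driving Cinder from an outside consumer via
# python-cinderclient. Credentials, auth URL, and names are placeholders.
from cinderclient import client

cinder = client.Client(
    '3',                             # Block Storage API v3
    'admin', 'secret', 'myproject',  # user / password / project (placeholders)
    'http://keystone-host:5000/v3',  # Keystone auth URL (placeholder)
)

# Create a volume on whatever backend the default volume type maps to;
# the backend driver handles thin provisioning on the array.
vol = cinder.volumes.create(size=32, name='vm-101-disk-0')

# Snapshot it array-side -- the feature set the SAN already provides.
snap = cinder.volume_snapshots.create(vol.id, name='vm-101-snap-1')
```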

I suspect that in the coming couple of years there will be a very large inrush of Proxmox customers switching from VMware for the same reason as us: price. This would make it dead simple for a large swath of new customers to use their existing storage without having to wait for storage vendors to catch up.

Just to give you a little context on why I believe a huge inrush is coming: our licensing went from $19K last year to $41K this year. This is happening across the board, and I suspect a mass customer exodus from VMware.
 
Everyone knows what's going on with VMware, and everyone is looking for Proxmox to jump through hoops to make itself a frictionless drop-in replacement for VMware, and that's just not how it's going to work in the beginning. I hope they get a ton more interest and investment, but man oh man do I cringe at the thought of too much low-ROI influence coming in from VMware guys and their expectations.

VMware was owned by DellEMC. The opportunity to push SAN products onto their VMware base was unavoidable.

"Since Proxmox does not have support for a cluster file system..." lol, what? Be honest now, no judgements... how many minutes of research have you done?
 

Hey now, no need for the hate. I've been using Proxmox in my homelab for about a year and have been researching a migration strategy, in earnest, for several weeks.

I don't want this to go down the same combative road as this post.

Ceph is a clustered storage provider, not some filesystem I can throw on a LUN, and the same goes for Gluster. I won't be able to use OCFS2 or GFS2, as they aren't supported, and supported storage is needed for enterprise environments.

The drop-in for VMware is what could push Proxmox into the main enterprise space; this feature would make the transition from VMware easier, cheaper, and quicker, spurring the growth of Proxmox's market share.
 
As already said, PVE does things quite differently. For example, it is block-storage-centric, not filesystem-centric like VMware products.
 
You're absolutely right; when I figured out how Proxmox uses ZFS for VMs and containers, it blew my mind.

But this is about work: initial cost is a factor, labor is a factor, ROI is a factor.

Adding support for Cinder would let existing storage systems be used in production while keeping their feature sets.

I don't get how adding a feature that would have such a large impact on potential customers could be discouraged.
 
Ceph is a clustered storage provider, not some filesystem I can throw on a LUN, and the same goes for Gluster. I won't be able to use OCFS2 or GFS2, as they aren't supported, and supported storage is needed for enterprise environments.
Ceph can absolutely meet your requirements and, in my opinion, is clearly superior to VMware vSAN simply because it is open source.

Ceph offers block storage with RBD, object storage with the Swift and S3 APIs, and even a distributed file system with CephFS. In this regard, NFS-Ganesha is also a good way to make Ceph NFS-capable. In theory, you can also create iSCSI gateways, although Ceph support for this was discontinued in 2022.
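To give a flavor of the RBD side, a minimal sketch using Ceph's Python bindings; it assumes a reachable cluster, a standard /etc/ceph/ceph.conf, and a pool named 'rbd':

```python
# Sketch: create and list RBD images with Ceph's Python bindings.
# Assumes a reachable cluster, /etc/ceph/ceph.conf, and a pool named 'rbd'.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # I/O context for the pool
    try:
        rbd.RBD().create(ioctx, 'vm-101-disk-0', 32 * 1024**3)  # 32 GiB image
        print(rbd.RBD().list(ioctx))   # image names in the pool
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```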

But yes, you cannot keep running the classic midsize infrastructure with overpriced storage systems the way you did before with VMware.
In my opinion, Ceph and Proxmox VE can definitely hold their own against VMware on the basic functions, and this variant costs only a fraction of what the VMware and EMC storage stack would cost. Many midsize companies don't look at what alternatives are available and continue to use central iSCSI storage, equipped with expensive support contracts and supposedly specially certified components.

Maybe see it as an opportunity to move away from the VMware/EMC world and get to know Ceph and Proxmox.

Basically, you can replace VMware with Proxmox, but it won't work exactly the way you knew it.
 
Proxmox is already well known in the main enterprise space. When you talk to a Dell sales rep and wind up going with whatever host/hypervisor/SAN combo they recommend, that is more like SMB with a members-only jacket thrown in as a gift. Not enterprise.

Cinder support MAY eventually come from a paid third party if market demand reaches a breaking point, or a nice guy will provide unofficial support with some scripts on his GitHub.

I don't mean to dig at you personally; I just see an avalanche coming of thousands of users in your exact position (need my vendor-locked SAN feature set), and I don't even know what to make of it. You cannot just mix these two wildly different cultures directly. I don't see the point of PVE guys devoting time to an issue that will be washed out within one product cycle.


I love that you read that old post, though. Even though it was four years ago, it still describes impeccably the personality and skill disparities that arise when (from one point of view) spoiled, unskilled VMware noobs, trained only to do one thing in one very specific way, are flung at the mercy of the PVE and FOSS community, where the culture is 100% focused on problem solving and self-reliance and 0% on vendor plugins.

A lot of people will pay the new, higher pricing for VMware because they need to maintain operations at their comfort/skill level and extract maximum value from hardware that was purchased with only one kind of infrastructure in mind. Hopefully they have learned their lesson and will build agnostic infra from scratch in the future.

Some people have a passion for hardware, want to learn scripting and automation, and do everything themselves; those are the natural enthusiasts who will see right away how superior Proxmox is. Others are willing to pay for 1-click integrations and a glossier GUI that makes them feel like rockstars, and that's all perfectly fine. VMware is not going away, but they are going to charge what the market will bear.

Let's not kid ourselves about the vendor-dependent culture that was deliberately cultivated by VMware's nursery-like user experience, its infrastructure design constraints, and the associated plugin madness. This was a great setup, years in the making.

BTW, 3 hosts and 1 SAN at each of 2 sites... that is also not a cluster; it was maybe sold to you as such. My #1 issue with Dell was how they kept pushing SAN infra long after it was shown to be passé. Completely unbalanced fault tolerances, too.
 
I hear you, and Ceph sounds like a great solution for our next hardware refresh, but the servers are less than a year old and the current SAN is just over one.
 
Fully agree. Still would be very useful to have a good option to make use of those existing SANs you already paid a crapton for until it's time for some new hardware.
 
I hear you, and Ceph sounds like a great solution for our next hardware refresh, but the servers are less than a year old and the current SAN is just over one.
You will be absolutely fine staying on VMware for the remainder of this cycle, and you can plan better for the next. You might not get support and software updates, but they are not going to turn off your stuff.
 
The main problem here is that you bought a VMware-endorsed SAN, which is maximally locked in, and then you ask the Proxmox guys (100% open source) to fix this problem. So yes, it is hard to please someone who is left with locked-in tech.
 

The thing is, we're not in the minority. Proxmox is a great product but has less than a 1% market share. With VMware's insane price hikes, people will be looking for alternatives, and this could be the last piece of the puzzle to help people switch.

So if we're going to spend a decent amount of money purchasing support from Proxmox, and integrating with Cinder would solve our storage issue, remain open source, and benefit a huge swath of customers, I'm definitely going to ask.

Regarding our SAN in particular, please look up the specs for the 500T. In addition to its VMware integration, it also has integration with Kubernetes via Dell's container storage interface (CSI) driver, and with OpenStack via Cinder.
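For what it's worth, reaching that same array from OpenStack is just a volume type away once the operator has mapped a type to the backend in cinder.conf. A sketch; the type name 'powerstore' is an assumption, it is whatever name the operator defined:

```python
# Sketch: target a specific Cinder backend (e.g. a PowerStore driver)
# through a volume type. The type name 'powerstore' is an assumption --
# it is whatever the operator mapped to the backend in cinder.conf.
from cinderclient import client

cinder = client.Client('3', 'admin', 'secret', 'myproject',
                       'http://keystone-host:5000/v3')  # placeholders

vol = cinder.volumes.create(size=100, name='db-data',
                            volume_type='powerstore')
```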
 
It's just weird to get newcomers into a community who, rather than trying to learn our way of doing things, want to force an unnatural marriage between two otherwise incompatible systems. Yes, we get that you have invested tons of cash in hardware and then the rules were changed on you.

Still, it is not for a free community to solve, and Proxmox is 100% free software. If you want to support the project and need access to the enterprise repos, you can pay $500 per year per socket or whatever; it's really a pittance compared to VMware licensing, there is no comparison. But don't get your hopes up with niche feature requests. What an awkward introduction.

I've had a feature request on the books for five years, with probably 20 other guys on the bug tracker agreeing that it should be changed, and I got nothing, so I worked around it.

In free software communities, you do not complain; you contribute. Whoever smelt it dealt it! Solve this yourself and pay it forward.

Hire some developers, knock this baby out, and then market and sell the Cinder integration yourself to the huge SAN install base that is coming down the pike. An entire arm of your business could be nursing the entry-level VMware community through the transition.
 
Bob, the reaction you are getting stinks because it's purely emotional. Beyond the emotion is the fact that Cinder was and is a disaster. Yes, there is code that does dynamic attachments. But who tests it? Who supports it? What happens when you update the software in your system? Are the updates transparent?
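For the curious: the attachment code in question lives in OpenStack's os-brick library, which handles the host-side iSCSI/FC plumbing. A rough sketch of what a consumer does; the IP is a placeholder, and the connection properties would normally be handed back by Cinder's attachment API:

```python
# Sketch: host-side volume attachment via OpenStack's os-brick library.
# The IP is a placeholder; 'connection_properties' would normally come
# from Cinder's attachment API for the target backend.
from os_brick.initiator import connector

# Gather this host's initiator details (iSCSI IQN, multipath, ...).
props = connector.get_connector_properties(
    root_helper='sudo', my_ip='192.0.2.10',
    multipath=True, enforce_multipath=False)

iscsi = connector.InitiatorConnector.factory('iscsi', 'sudo')
# device_info = iscsi.connect_volume(connection_properties)    # attach
# iscsi.disconnect_volume(connection_properties, device_info)  # detach
```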

The Proxmox devs would be nuts to take this on. Combining Cinder with Proxmox would be a Rube Goldberg experiment that would end with many unhappy support requests. There are just so many issues with this that it's not even worth a flame war, not to mention the fact that the Proxmox team isn't about to start integrating Python components.

This is all to say that you chose your storage for support and compatibility with VMware. You may want to align with best practices for Proxmox or deal with companies who support what you need. If it's not well tested, deployed, and supported... assume it's broken.
 
BTW, I just looked at the Cinder CI stats after writing my last message. You would be surprised that even the EMC Cinder drivers are not passing with a 100% success rate. Whose problem would that be?
 
Proxmox devs are wary of taking on outside projects; in my opinion, ever since the whole DRBD license problem, where they changed the license overnight.
 
The problem with many proprietary storage vendors is that they simply say: "We do not support Proxmox." It won't help if we implement a Cinder plugin, because the other endpoint is still on the storage box and controlled by the storage vendor, so we could never debug or fix problems on that side of the Cinder implementation.

That is why proprietary storage vendors need to implement Proxmox storage plugins themselves and officially declare support for them.
 
Thanks for the link. The number is "the number of companies tracked by 6sense that use Proxmox".
No idea what "tracked by 6sense" means exactly, but 3,078 companies seems a bit too low.

The right numbers:
Proxmox VE has more than 15,000 paying customers and hundreds of thousands of users without a subscription, in total more than a million unique hosts (accessing our update infrastructure daily).
 
