Enterprise Alternatives to VMware

DarkOlive

New Member
Nov 29, 2022
With VMware transitioning to subscription-only licensing and deprecating many of their products, I'm exploring alternative virtualization platforms suitable for an enterprise environment we manage, which includes:
  • 12 hosts with 600 VMs at the main site.
  • 6 hosts with 20 VMs at a hot site with storage mirroring.
  • 42 remote sites with basic virtualization needs.
  • Fibre Channel storage throughout.
Does anyone have experience with Proxmox VE in a distributed and expansive environment like ours?
How does it handle large-scale deployments?

We're considering necessary features such as clustering, centralized management, live migration, storage, backup strategies, and more.
 
I don't have experience using Proxmox in the enterprise, but DRS is not a feature in Proxmox yet (as of 22/1/2024). It is, however, on their roadmap.
 
Clustering, centralized management, live migration, storage, and backups are all, let's say, integrated into the Proxmox ecosystem.
For now DRS isn't there, though there are some alternatives (Ceph mirroring, etc.).
Since you have a lot of remote sites, ideally you would be using multi-datacenter management tools.
 
It is worth separating concerns.

First, your core site should be fine. 600 VMs isn't large relative to what we see among our customers. My own experience suggests that you will likely run into scalability challenges with Fibre Channel. You will undoubtedly want live migration and high availability, which means you'll need to configure thick LVM (if you maintain your FC footprint). Aside from the lack of snapshots and thin provisioning, it will not feel very "enterprise" due to all of the slicing and dicing of LVM.
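As a rough sketch of that thick-LVM setup (the VG name, storage ID, and multipath device are made up for illustration): after creating the volume group on the multipath FC device from one node, the entry in /etc/pve/storage.cfg would look roughly like this:

```
# created once, on any one node:
#   pvcreate /dev/mapper/mpatha
#   vgcreate vg_fc /dev/mapper/mpatha
lvm: fc-lvm
	vgname vg_fc
	content images
	shared 1
```

The `shared 1` flag tells PVE that the same VG is visible from every node, which is what enables live migration; snapshots and thin provisioning remain unavailable on thick LVM, as noted above.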

Second, the question about the 42 remote sites warrants more clarification on requirements:
  • What are you using today? ESXi and Veeam?
  • Do you need HA?
  • Do you need edge-to-core centralized backup?
  • What are your RPO and RTO?
  • Do you need to migrate VMs between sites (or back to the core)?
For total transparency, please note that I work for a storage company.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It works for live migration, but without snapshots: internally, the default for shared block storage is thick LVM, which has no VM snapshot support.
I wonder why ZFS is listed as not shared storage, but ZFS+iSCSI is explicitly listed as supported for shared storage.

So drivers for the major HBA vendors should be there, but it will be limited to an LVM backend?
 
I wonder why ZFS is listed as not shared storage, but ZFS+iSCSI is explicitly listed as supported for shared storage.
"ZFS over iSCSI" is a somewhat unique storage plugin implementation that combines two technologies. Personally, I think the plugin is misnamed, its more "ISCSI over ZFS".
The essence of the technology is the ability of the client (PVE) to ssh into the Storage (Linux or FreeBSD based) and issue ZFS management commands to create/delete/extend ZFS volumes directly on the Storage. Obviously a major requirement here is for the Storage to be ZFS based internally and for it to allow ssh and manual ZFS management.
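For reference, the corresponding entry in /etc/pve/storage.cfg looks roughly like this (portal address, target IQN, and pool name are placeholders; the LIO provider additionally needs the `lio_tpg` option):

```
zfs: zfs-san
	blocksize 4k
	iscsiprovider LIO
	lio_tpg tpg1
	pool tank
	portal 192.0.2.10
	target iqn.2003-01.org.example.storage:target1
	content images
	sparse 1
```

PVE then ssh-es to the portal host as root to run the `zfs create`/`destroy` commands described above.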

Once a volume is created, it is then exported as a raw device via iSCSI to the client (PVE). Again, the storage implementation must allow such exports to be created. At this point the client is not aware that it's accessing a ZFS volume in the backend. This is similar to any SAN: your client doesn't know the internal structure of the LUN, i.e. the RAID behind it.

As the final client connection is over iSCSI, the storage can be mapped/unmapped on demand. This is similar to the Blockbridge plugin implementation. It is also very different from VMware, where the LUN is presented to all servers at the same time and VMFS is used as a cluster-aware file system.
If we want to be really nit-picky, the storage is not shared in the common sense; rather, the connecting fabric is freely moved on demand across the nodes.

ZFS on its own is not capable of arbitrating simultaneous multi-client access to a multi-host-attached LUN, hence it's not compatible with shared storage.

Hope this explanation helps a bit.


 
"ZFS over iSCSI" is a somewhat unique storage plugin implementation that combines two technologies. Personally, I think the plugin is misnamed, its more "ISCSI over ZFS".
The essence of the technology is the ability of the client (PVE) to ssh into the Storage (Linux or FreeBSD based) and issue ZFS management commands to create/delete/extend ZFS volumes directly on the Storage. Obviously a major requirement here is for the Storage to be ZFS based internally and for it to allow ssh and manual ZFS management.

Once a volume is created, it is then exported as raw device via iSCSI to the client (PVE). Again - the storage implementation must allow such exports to be created. At this point the client is not aware that its accessing a ZFS volume in the backend. This is similar to any SAN - your client doesnt know the internal structure of the LUN, ie RAID behind it.

As the final client connection is over iSCSI, the storage can be mapped/unmapped on the demand. This is similar to Blockbridge plugin implementation. It is also very different from the VMware where the LUN is presented to all server at the same time and VMFS is used as Cluster Aware File System.
If we want to be really nit-picky, the storage is not Shared in common sense, but the connecting fabric is freely moved on demand across the nodes.

ZFS on its own is not capable of arbitrating simultaneous multi-client access on a multi-host attached LUN, hence its not compatible with Shared Storage.

Hope this explanation helps a bit.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Now it's clear. Evidently the naming is odd.

Now that you mention how Proxmox just orchestrates the provisioning, I wonder how difficult it would be to implement something like VVols, commanding a storage array via API or CLI. That should enable shared disks via LUN sharing, and even allow advanced services via the storage box (snapshots, thin provisioning, replication).

I understand that regardless of where the VM disks reside, the actual VM definition always goes to /etc/pve (pmxcfs). The implementation would be less capable than VMware's VVols, but also simpler.
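To illustrate the point about pmxcfs: a VM's definition is just a small text file under /etc/pve/qemu-server/, replicated to all nodes, regardless of the storage backend. A sketch (the storage ID `fc-lvm` and all values here are illustrative):

```
# /etc/pve/qemu-server/100.conf
name: testvm
cores: 2
memory: 4096
scsi0: fc-lvm:vm-100-disk-0,size=32G
net0: virtio=BC:24:11:00:00:01,bridge=vmbr0
```

Only the `scsi0` line points at the storage plugin; everything else is backend-agnostic.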

Any hints on how storage plugins should be built to be accepted as contribution?
 
Now that you mention how Proxmox just orchestrates the provisioning, I wonder how difficult it would be to implement something like VVols, commanding a storage array via API or CLI.
It's possible; we did it. The Blockbridge implementation is often compared to VVols in VMware. However, Blockbridge storage was designed to be driven via API from day one. It would be more difficult for storage products where the API is a bolt-on.

You also have to worry about scale. 600 VMs means you have at least 600 LUNs, not to mention Cloud-Init and/or EFI disks. I suspect traditional SANs will not be as receptive to 1800 LUNs as more modern solutions.
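The LUN math above, as a quick sketch (three LUNs per VM is an assumption: one data disk plus possible Cloud-Init and EFI disks):

```shell
vms=600
luns_per_vm=3   # data disk + cloud-init + EFI (assumed worst case)
echo $(( vms * luns_per_vm ))   # prints 1800
```

Every one of those is a separate LUN the array has to create, export, and track, which is where a traditional SAN's per-target limits start to bite.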

Even if you create the plugin, it's not going to be accepted into the official PVE repository unless the storage it controls is open source. Blockbridge is not an open-source product, so we fully support the plugin for our customers ourselves. It is part of our CI infrastructure with multiple daily test runs. We also test all supported versions of PVE and keep track of any updates to ensure compatibility.

In short, it's doable, but time-consuming and hence costly.

You can find examples of various plugins in /usr/share/perl5/PVE/Storage/ on a PVE host, or at https://github.com/proxmox/pve-storage/tree/master/src/PVE/Storage



 
Thanks for your comments!
 
Hi. Regarding remote sites: currently the Proxmox GUI is really for a local cluster (or a cluster with low latency, < 10 ms), and it needs quorum to work (as you don't have any masters/slaves in Proxmox; all members are masters and can write to /etc/pve).
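"Quorum" here means a strict majority of cluster votes; with the default of one vote per node, the arithmetic is floor(n/2) + 1. A quick sketch using the node counts from the original post:

```shell
# votes needed for quorum with one vote per node: floor(n/2) + 1
for nodes in 6 12; do
    echo "$nodes nodes -> quorum $(( nodes / 2 + 1 ))"
done
# prints:
# 6 nodes -> quorum 4
# 12 nodes -> quorum 7
```

This is also why stretching a single cluster across high-latency remote sites is risky: losing the WAN link can drop a site below quorum and make /etc/pve read-only there.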

There is no official central GUI to manage multiple clusters, or multiple single hosts.
(So you need to connect to the web GUI of each remote site.)

I have a friend working on this app to centralize server management (still in beta):
https://cluster-manager.fr/

Maybe some other apps exist; I really don't know.
 
