Recommendation for software-defined storage for Proxmox on OVH (with Veeam snapshot integration)

imadam
Jul 31, 2021
Hi everyone,
we’re planning a setup on OVH dedicated servers where Proxmox VE will be our main virtualization platform, and we’ll have a separate Veeam Backup & Replication server for backups.
What we’re looking for is a software-defined storage (SDS) solution that:
  • can run on / with OVH dedicated servers (no access to custom on-prem SAN hardware),
  • exposes storage to Proxmox (iSCSI / NFS / something else that works well in practice),
  • has good integration with Veeam – ideally storage-level snapshots that Veeam can orchestrate (Veeam storage plugins / snapshot integration),
  • is reasonably mature and stable for production use.
We are not considering Ceph for this project, so I’d prefer to keep the discussion focused on other options.
If you have a similar setup (Proxmox on OVH + SDS + Veeam), I’d really appreciate it if you could share:
  • which SDS solution you’re using,
  • how you’ve integrated it with Veeam (snapshot workflow, offload to backup repo, etc.),
  • any gotchas / limitations you ran into on OVH,
  • rough licensing / cost implications (if you can share).
Thanks in advance for any pointers or real-world experiences!
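For what it's worth, whichever SDS ends up behind it, PVE consumes NFS or iSCSI through entries in /etc/pve/storage.cfg (or the equivalent `pvesm add` commands). A sketch of what that looks like, with invented names and addresses:

```
# /etc/pve/storage.cfg -- hypothetical entries, adjust names/IPs/exports
nfs: sds-nfs
        server 10.0.0.5
        export /export/proxmox
        content images,rootdir

iscsi: sds-iscsi
        portal 10.0.0.5
        target iqn.2005-10.org.example:proxmox
        content none
```

Note that the iSCSI entry only discovers LUNs; for shared block storage you'd usually layer an LVM storage entry on top of the LUN.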
 
has good integration with Veeam – ideally storage-level snapshots that Veeam can orchestrate (Veeam storage plugins / snapshot integration),
Veeam's ability to use backend storage snapshot functionality to offload the backup workflow is limited to vSphere. The Veeam PVE plugin integrates with PVE by hooking into the primary QEMU process, somewhat similar to how PVE and PBS work together (but not the same).

This is all to say that practically any SDS would be fine with Veeam because there are few if any storage dependencies.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Our offering is designed for Service Providers and Enterprises running on-premises infrastructure. Many SPs directly compete with companies like OVH.


Whether it’s feasible to run our software on OVH depends on the availability of suitable server and network capabilities, and, of course, the budget. OVH’s business model is to maximize revenue from renting standardized servers, so once you factor in all the required components, the overall economics may simply not work out.


That said, we’ve never seriously explored this option. If your requirement is only 5–10 TB, it’s unlikely to be cost-effective. You’d probably be better off renting NAS storage directly from them. And if you need significantly more - on-prem starts making a lot more sense.


 
@bbgeek17 thanks a lot for the explanation, that helps put things into perspective.
In our case we’re currently looking at two basic options:
  1. Buy 3 servers and colocate them with a local datacenter provider (where we’d have more control over hardware/network and could build an SDS cluster on top of that), or
  2. Rent 3 dedicated servers from OVH and run the SDS stack there.
The customer’s capacity requirements are actually relatively small (single-digit TB range), but they do need good performance/low latency – this is more about IOPS and responsiveness than about storing large amounts of data.

Given that, we’re trying to decide whether it makes more sense economically and technically to invest in 3 local servers with SDS, or to stay in the OVH ecosystem and accept the limitations you mentioned.

Your comment about OVH economics for SDS when you “only” need 5–10 TB fits our situation quite well, so we’ll definitely take that into account in our decision.
 
If you are searching for a PVE "in-house" solution you can also try HA + ZFS storage replication - but be aware it's async
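A minimal sketch of that replication setup, assuming a VM with ID 100 on ZFS-backed local storage and a second node named pve2 (all names invented):

```shell
# Create an async replication job for VM 100 to node pve2,
# syncing every 15 minutes with a 50 MB/s bandwidth cap.
pvesr create-local-job 100-0 pve2 --schedule "*/15" --rate 50

# Inspect job status; the last-sync timestamp shows how far
# behind the replica is (i.e. your worst-case data loss window).
pvesr status
```

The sync interval is the async trade-off: on failover you can lose up to one interval's worth of writes.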
 
If you are searching for a PVE "in-house" solution you can also try HA + ZFS storage replication - but be aware it's async
I am looking for something where I don't have to master rocket science. We are coming out of a VMware environment, and our customers are willing to pay for support.
 
but they do need good performance/low latency
The devil's in the details. What is "good" in this context? Numbers under load would be a good start; it's kind of hard to hit a target that isn't visible/understood.
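To put numbers on it, a quick fio baseline run inside a test VM on the candidate storage is a reasonable start (file path, size, and mix are placeholders to adjust):

```shell
# 70/30 random read/write at 4k, direct I/O, 60s steady-state run.
# Reports IOPS plus completion-latency percentiles per job group.
fio --name=randrw --filename=/mnt/testvol/fio.dat --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
```

The p99 completion latency under this kind of sustained load is usually a more honest "responsiveness" number than peak IOPS.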

You’d probably be better off renting NAS storage directly from them
This is most likely the way.
 
If you take 3 servers in a local data center, you can go deeper there: more control over the hardware and network, and less dependence on the provider's restrictions. In the long run it is often cheaper than tuning SDS on OVH for the sake of a few terabytes.

I've caught myself thinking that it is better to have a solution that is easy to maintain and scale as needed than a "beautiful" but nervous one. I would start with the simple option, and if the client really grows in terms of requirements, then think about more complex SDS scenarios.
You can go simpler, for example with StarWind VSAN or even ZFS replication between nodes, and run backups via Veeam on top of that without storage-level snapshot integration, because it doesn't always deliver the magic promised in the marketing.
 
Veeam without storage level snapshot integration, because it doesn't always provide the magic promised in marketing.
There is no storage level snapshot integration in a Veeam+PVE combination for any storage type.


 
Interesting setup — especially with the Ceph constraint, that narrows things in a useful way.

From what I’ve seen in similar Proxmox + OVH environments, teams usually land on solutions like ZFS over iSCSI (with something like TrueNAS) or go with platforms like LINSTOR/DRBD or StarWind VSAN. The tricky part isn’t exposing storage to Proxmox — it’s getting clean snapshot coordination with Veeam without introducing fragile layers.
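For reference, the ZFS-over-iSCSI variant maps to a storage.cfg entry along these lines. Names, IPs, and the provider are placeholders: TrueNAS SCALE exposes targets via LIO, while other appliances use istgt, comstar, or iet.

```
# /etc/pve/storage.cfg -- hypothetical ZFS-over-iSCSI entry
zfs: truenas-zfs
        pool tank/proxmox
        portal 10.0.0.5
        target iqn.2005-10.org.example:proxmox
        iscsiprovider LIO
        blocksize 8k
        sparse 1
```

With this backend PVE creates a zvol per disk on the target and snapshots happen on the ZFS side, which is as close to "storage-level snapshots" as the PVE stack gets; Veeam still won't orchestrate them.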

One thing to watch: even if a solution “supports snapshots,” Veeam integration often depends on how well the storage exposes those snapshots via APIs/plugins. In practice, many teams still rely on hypervisor-level snapshots because storage-level orchestration can get messy in distributed setups.

A broader pattern I've seen in IT and infrastructure work applies here too: the tech itself is rarely the bottleneck, it's how well the components coordinate under load and in edge cases. Storage stacks behave the same way.

If you want something stable:

  • prioritize predictable failover over “feature richness”
  • test snapshot + restore workflows under load (not just in isolation)
  • validate how Veeam handles edge cases like partial node failure
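The middle item above can be scripted as a rough smoke test from the PVE host, assuming a disposable test VM with ID 100 (hypothetical) that is running an I/O generator such as fio inside the guest:

```shell
# Snapshot/rollback a VM while it is under sustained I/O load.
VMID=100   # disposable test VM, adjust

# With load running inside the guest, take a snapshot from the host
# and confirm it registers; watch guest latency while this runs.
qm snapshot "$VMID" loadtest --description "snapshot under load"
qm listsnapshot "$VMID"

# Roll back, then verify data integrity inside the guest before cleanup.
qm rollback "$VMID" loadtest
qm delsnapshot "$VMID" loadtest
```

If snapshot creation or rollback stalls noticeably under load, that is exactly the fragility worth finding before production.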
 