SheepDog on ProxVE 2.0 - General state of things query (Performance, stability)?

fortechitsolutions

Renowned Member
Jun 4, 2008
Hi, I'm just curious to ask if anyone can comment on recent real-world / hands-on experience with Sheepdog-based storage for their Proxmox VE 2.0 environment, i.e.:

- general comments on the platform tested/used (e.g., SAS disks in HW RAID10, 3 nodes as sheepdog members? SATA disks in HW RAID6, 3 nodes? SATA disks, no RAID, only sheepdog replicas for redundancy? see the sketch after this list) :)
- general stability
- general performance
- general thoughts on suitability for production .. soon .. or still a while before it is solid enough?
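
For that last, replica-only layout, here is a rough sketch of what I mean (sheepdog 0.x commands from memory, store path illustrative; it assumes the nodes' corosync cluster is already running):

    # on each of the 3 nodes: start the sheep daemon on a local store directory
    sheep /var/lib/sheepdog

    # once, from any node: format the cluster with 3 copies of every object,
    # so redundancy comes from replicas rather than RAID
    collie cluster format --copies=3

    # confirm all three nodes have joined
    collie node list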

I gather that the current implementation will likely be limited to single-NIC connectivity between member nodes for sheepdog traffic, i.e., a max theoretical throughput of roughly 100 MB/s assuming a decent gigabit-ethernet network between proxve2 nodes. Although possibly, if you do LACP-trunked interfaces for your Proxmox node connectivity, you get around this 'transparently' (not sure if anyone has tested)?
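
If anyone wants to try that, here is a hedged sketch of what the bond might look like in /etc/network/interfaces on a node; interface names and the address are made up, the switch side needs matching 802.3ad config, and I haven't tested it with sheepdog traffic:

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

One caveat: LACP balances per flow, so a single TCP connection between two nodes is still capped at one link's speed; only the aggregate across multiple flows improves.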

I've been reviewing the 'state of things' a bit more in .. other VM platforms .. for similar solutions, and was curious about this again for ProxVE. (Maybe Ceph is more or less mature than previously documented? Although my impression is that SheepDog currently seems like the better candidate for moving forward here?)

Any comments/thoughts/insights are certainly most welcome.

Many thanks!


Tim Chipman
 
So far, Ceph block storage looks more stable to me. I would invest free test time there as well.

AFAIK you can already get/buy support from inktank.com for running a Ceph cluster. My last tests with sheepdog were some time ago, so maybe there has been progress; I would like to hear other reports as well.
 
From my tests, sheepdog is not stable enough for me (I have nodes crashing regularly).
Benchmarks show around 40,000 IOPS for random writes, and more than 10G of bandwidth for sequential.
The latest version has a journal feature, so you can boost your HDD cluster by putting the journal on SSDs.
A new version is coming soon.
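
For reference, numbers like those are typically measured with something like fio; a minimal sketch, not my exact job files (device path and parameters are illustrative, and the write test is destructive, so point it at a scratch disk only):

    # 4k random-write IOPS
    fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting

    # 1M sequential-read bandwidth
    fio --name=seqread --filename=/dev/vdb --rw=read --bs=1M \
        --direct=1 --ioengine=libaio --iodepth=16 \
        --runtime=60 --time_based --group_reporting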

Ceph is really stable for me. Our benchmarks show a limit of around 20,000 IOPS for random writes, but more than 10G of bandwidth with sequential tests.
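
On the Ceph side, rados bench (shipped with Ceph) gives comparable raw numbers; a minimal sketch, assuming a pool named 'rbd' (not necessarily the exact setup behind the figures above):

    # 60s of 4k writes against the 'rbd' pool, 32 concurrent ops, keep the objects
    rados bench -p rbd 60 write -b 4096 -t 32 --no-cleanup

    # sequential reads of the objects just written
    rados bench -p rbd 60 seq -t 32

    # remove the benchmark objects afterwards
    rados -p rbd cleanup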
 
