sheepdog / ceph questions

Hello

How close is sheepdog to being production ready?

Can ceph run on pve nodes?


thanks

Ceph is ready for production, and you can also get commercial support from the Inktank team.
Yes, you can run Ceph on the Proxmox host (Proxmox 3.0 - wheezy). Use XFS as the filesystem, as I don't think btrfs is ready with the current RedHat-Proxmox kernel.
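
For the OSD disks that usually means something like this (device name and OSD number here are only examples, adjust to your own setup):

Code:
# format the OSD data disk with XFS and mount it with the usual Ceph-friendly options
mkfs.xfs -f -i size=2048 /dev/sdb1
mkdir -p /var/lib/ceph/osd/ceph-0
mount -o noatime,inode64 /dev/sdb1 /var/lib/ceph/osd/ceph-0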

Sheepdog is not ready for production; there are a lot of changes at the moment, and sheepdog 0.6 (coming with Proxmox 3.1) breaks the current sheepdog cluster format.
Sheepdog 1.0 should be ready by the end of the year.
 
Highly available storage running on PVE nodes?
That in itself is a contradiction: you have to physically separate the storage boxes and the VM hosts. Combining both in the same physical server is one big disaster waiting to happen.

DRBD is really more of a last resort (because it only stores 2 copies of the data) if you can't have anything else.
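
To illustrate the 2-copy limit: a DRBD resource is always defined between exactly two hosts, roughly like this (host names, disks and IPs are only placeholders):

Code:
resource r0 {
  protocol C;                    # synchronous replication between the two peers
  on nodeA {                     # section name must match `uname -n` on the first box
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7789;
    meta-disk internal;
  }
  on nodeB {                     # ... and the second box - there is no room for a third copy
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}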
 
Combining PVE host and storage means that the cluster has more places to run.

With PVE and storage separate, more hardware is needed.

Our plan is to use 'PVE desktops' spread out, so that if there were a fire in the server room, the PVE desktops would take over running our 6 essential VMs. Currently each of those PVE desktops runs utility KVMs that do not need a cluster.


We've used DRBD since 2005, and can wait a year or two until sheepdog is production-ready to run on PVE, or Ceph on PVE.
 
RE: you have to physically separate storage (boxes) and VM hosts.

Could you explain why they should be separated, or point me to a link or search terms? I'll research ...
 
Distributed storage produces high load in many situations, which will have a negative impact on your virtualization host and your virtual machines.

And high load on your virtual machines will have a negative impact on the distributed storage.

That's the main reason why no one really recommends this, but feel free to dig deeper: read http://ceph.com/docs/master/install/ and read/post on the ceph or sheepdog mailing lists.

But of course, there are people around doing this.
 
Let's see if I get it:
PVE could run on server-grade hardware, using a hardware RAID-10 SSD array of roughly 400 GB.
KVM storage would be on some dedicated Ceph storage systems.
I'd run local network services using OpenVZ.
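
If I've got that right, the storage setup in /etc/pve/storage.cfg would look roughly like this (storage names, pool and monitor address are made up, and if I remember right the Ceph keyring also has to be copied under /etc/pve/priv/ceph/):

Code:
# KVM disk images on the dedicated Ceph systems
rbd: ceph-kvm
    monhost 192.168.1.11
    pool rbd
    username admin
    content images

# OpenVZ containers on the local RAID-10 SSD array
dir: local
    path /var/lib/vz
    content rootdir,vztmpl,iso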
 
Alternatively: store the OpenVZ containers on a remote NFS system. You could probably even have a RAID1 in two Ceph boxes and DRBD those two, or use any other way you know of to replicate NFS storage. However, since DRBD only allows for 2 copies of the data, you should "manually" (via cronjob) sync the data to a third location and/or to backup tape every day.
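
The nightly sync can be as simple as a cron entry along these lines (paths and host name are made up, use whatever fits your layout):

Code:
# /etc/cron.d/nfs-offsite-sync - push the exported data to a third box every night at 02:30
30 2 * * * root rsync -aH --delete /srv/nfs/ backup3:/srv/nfs-mirror/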

Re: why to keep storage and virtualization servers apart: storage replication uses up bandwidth, quite a lot of bandwidth actually after a drive failure (since Ceph has to re-replicate all the data the faulty disk held)
... bandwidth you don't have available to your VMs/CTs.
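
Rough numbers to make that concrete (assuming a 1 TB failed disk and a single shared gigabit link with ~110 MB/s usable): 1,000,000 MB / 110 MB/s is roughly 9,000 seconds, i.e. about 2.5 hours of a fully saturated link just for the re-replication, and during that time your VMs/CTs are fighting for the same wire.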
 
A couple of questions:

1- Is NFS on wheezy with RAID10 and DRBD a fast enough* host for NFS? Our data totals about 200 GB.

2- In the past, using NFS for KVM storage would get slow [the issue could have been with the configuration we used]. My question is: does KVM on NFS generally work OK?

*We use server-grade hardware, have at most 40 users, and change about 40 MB of data per hour at busy times. Our main concern is availability. We use a gigabit network between managed switches.


Thanks for the advice so far.
 
2) Hi, I'm running around 300 VMs on a NetApp storage through NFS, without problems.

1) It totally depends on your workload, but having a hardware RAID controller with a battery-backed writeback cache is always good for handling random writes.
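
For reference, a plain Linux NFS setup for KVM images usually boils down to something like this (paths, subnet and addresses are only placeholders):

Code:
# on the NFS server: /etc/exports
/srv/vmstore 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# on the PVE side: /etc/pve/storage.cfg
nfs: vmstore
    server 192.168.1.20
    export /srv/vmstore
    path /mnt/pve/vmstore
    content images
    options vers=3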
 
