Understanding ceph

MimCom

Now that we have both ceph and glusterfs options available to us, I wanted to know which is best for particular use cases. While I have yet to answer that definitively, I did come across a truly excellent series of talks from linux.conf.au 2013: http://www.shainmiley.com/wordpress/2013/06/05/ceph-overview-with-videos/

I suggest starting with the third video (architecture overview and deployment discussion), then the first (software overview with tutorial), and finally the second (a discussion/debate between the principal architects of ceph and glusterfs).
 
+1 for ceph :)

(For me, glusterfs (for KVM) is just a hack: no snapshots, no clones, when a node goes down and comes back up the full VM file needs to be read to resync, no multiple-server mounts, ...)

I think my preferences, in order, are:

1) ceph
2) sheepdog
3) glusterfs
 
I would say

0) ZFS
1) ceph
2) sheepdog


ZFS is not distributed/replicated storage. I assume that is the reason why spirit does not mention it in this thread.
 
Is there anything available for active-passive configuration? I'm looking for a solution to replicate the OS disk across 3 disks. All disks are the same, and they are in the same node.
Issue:
I just purchased this 6026TT-BTRF SuperServer with four X8DTT-F nodes: http://www.ebay.com/itm/300889121182?ssPageName=STRK:MESINDXX:IT&_trksid=p3984.m1436.l2649
I need the PCI-e slot for additional NICs.
I only have one slot, so I cannot run hardware RAID.
I do not want to use a software RAID solution because it is not supported, and I will be purchasing support from Proxmox (I do not want any issues at support time).
Could I use Ceph or DRBD as a solution to this problem? It doesn't even need to be failover; I will have failover between the nodes. I would like to be able to set a new disk as boot and have the same PVE running (a matter of removing the hot-swap disk and turning the node back on).
 
Hi, I don't know if it's possible to boot a Proxmox host from ceph (and you would need external ceph storage, so I'm not sure it helps).
I'm not a big fan of complex storage for the Proxmox host system partition.
Go for RAID, and if you don't want that, why not a btrfs mirror?
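For what a plain two-disk btrfs mirror could look like, here is a minimal sketch; the device names and mount point are made up, the commands are wrapped in Python's subprocess just to keep one language in this thread, and note that mirroring the boot disk itself would additionally need the bootloader taken care of:

[code]
import os
import subprocess

# Hypothetical device names and mount point -- adjust to the actual hardware.
devices = ["/dev/sdb", "/dev/sdc"]
mount_point = "/mnt/mirror"

os.makedirs(mount_point, exist_ok=True)

# Create a btrfs filesystem that mirrors both data (-d) and metadata (-m)
# across both disks (RAID1), then mount it via either member device.
subprocess.run(["mkfs.btrfs", "-f", "-d", "raid1", "-m", "raid1", *devices], check=True)
subprocess.run(["mount", devices[0], mount_point], check=True)
[/code]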
 
Can a Proxmox node act as both a ceph storage and client node, or do those functions need to be on different servers?

Yes, a node can act as ceph storage, but nobody really recommends using it that way (only for testing).
 
Re: Understanding ceph near full / full ratio

hi!

we managed to get ceph storage working with proxmox.

just one thing still remains unclear:

we have, for example, 3 storage nodes with 5 HDDs each: one for the system and 4 for storage at 10 GB each, totalling 120 GB across all 3 nodes.

regarding the near full / full ratio / replica:

we want one node to be able to fail completely while the storage cluster keeps working.

so what should the full ratio be: 0.66 or 0.65? and replica 2 or 3?
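to make the arithmetic behind that question explicit, here is a rough sketch (plain Python, using the numbers from the setup above; treat it as back-of-the-envelope reasoning, not an official Ceph sizing rule):

[code]
# Back-of-the-envelope capacity check, assuming the numbers from this post:
# 3 nodes, 4 storage disks of 10 GB each per node (120 GB raw in total).

nodes = 3
disks_per_node = 4
disk_size_gb = 10

raw_total = nodes * disks_per_node * disk_size_gb                 # 120 GB raw
raw_after_failure = (nodes - 1) * disks_per_node * disk_size_gb   # 80 GB left

# For the cluster to stay writable and able to re-replicate after one node
# dies, the raw space already used must still fit on the surviving nodes,
# i.e. stay below (nodes - 1) / nodes of the total.
safe_fraction = raw_after_failure / raw_total

print(f"raw total:                {raw_total} GB")
print(f"raw left after 1 failure: {raw_after_failure} GB")
print(f"usable fraction before a node failure becomes fatal: {safe_fraction:.2f}")
[/code]

read that way, 0.65 leaves a little headroom under the 2/3 limit while 0.66 sits right on it; and with only 3 nodes, replica 3 has nowhere to restore the third copy while a node is down, whereas replica 2 still has one copy left and room to re-replicate.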

best regards

karl
 
