Ceph EC across OSDs


May 8, 2023
I'm looking into different ways I could use ceph across my limited hardware.

I have 3 servers (could stretch to 4) plus a bunch of mini pcs

I have two distinct pools of data, media (non-critical) and VM services (critical)

I was thinking of setting up Ceph across 20-24 HDD OSDs, with an enterprise SSD for the DB/WAL in each box.

I'll run CephFS for the media and RBD for the VMs.

Looking into CRUSH maps and pool design, I'm thinking of optimising storage capacity by using EC for the media pool with the OSD as the failure domain. It doesn't matter if the files go offline while a server is down. I'm planning 8+3 stripes.
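For reference, the EC side would look something like this (an untested sketch; the profile/pool/fs names are placeholders and the PG counts would need sizing for the actual cluster):

```shell
# EC profile: 8 data + 3 coding chunks, spread per OSD rather than per host
ceph osd erasure-code-profile set media-ec k=8 m=3 crush-failure-domain=osd

# EC data pool for the media CephFS; overwrites must be enabled to use it with CephFS
ceph osd pool create media-data 128 erasure media-ec
ceph osd pool set media-data allow_ec_overwrites true

# CephFS metadata has to live on a replicated pool;
# --force is required when the default data pool is erasure-coded
ceph osd pool create media-meta 32 replicated
ceph fs new media media-meta media-data --force
```

An alternative is to create the filesystem on replicated pools and attach the EC pool with `ceph fs add_data_pool`, then direct the media directory to it via a file layout.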

For the VMs I'll just keep 3/2 replicated pool.
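The replicated VM pool is simpler (again an untested sketch; pool name is a placeholder):

```shell
# Replicated pool, size 3 / min_size 2, using the default host-level CRUSH rule
ceph osd pool create vm-rbd 128 replicated
ceph osd pool set vm-rbd size 3
ceph osd pool set vm-rbd min_size 2
rbd pool init vm-rbd
```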

I'll need to share cephfs pool over smb for access from windows hosts. For docker I'll either use smb or cephfs volumes.
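On the SMB side, Samba's vfs_ceph module can export CephFS directly without a kernel mount; a minimal share might look like this (share name, CephX user and path are assumptions):

```ini
[media]
   path = /
   vfs objects = ceph
   ceph:config_file = /etc/ceph/ceph.conf
   ceph:user_id = samba
   kernel share modes = no
   read only = no
```

The simpler alternative is to kernel-mount CephFS on the gateway host and share the mountpoint with a plain Samba share.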

Is this a workable strategy? I've not done much with ceph but keen to move away from truenas.
Do not use OSD as the failure domain. You will lose data. You will have data unavailable in case of maintenance of one of the hosts.
Totally understand that, which is exactly why I'm only planning to do it for the non-critical data. I could lose that data tomorrow and I wouldn't care. The VM storage I do care about, though.

I'm trying to understand: if I have separate pools with separate CRUSH rules, one being OSD-level EC and the other being host-level 3/2 replication, the replicated pool would survive a host failure whereas the EC pool would suffer data loss/unavailability. Am I correct in my understanding?
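One way to sanity-check that split is to look at which CRUSH rule each pool uses and what failure-domain type the rule selects (pool names here are assumptions):

```shell
# Show which CRUSH rule each pool is assigned
ceph osd pool get media-data crush_rule
ceph osd pool get vm-rbd crush_rule

# Dump the rules: look for "type": "osd" vs "type": "host"
# in the chooseleaf/choose steps
ceph osd crush rule dump
```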
