I'm looking into different ways I could use Ceph across my limited hardware.
I have 3 servers (could stretch to 4) plus a bunch of mini PCs.
I have two distinct pools of data: media (non-critical) and VM services (critical).
I was thinking of setting up Ceph across 20-24 HDD OSDs, with an enterprise SSD for the DB/WAL in each box.
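Roughly what I had in mind per disk, if I go the ceph-volume route (device names are just placeholders for whatever's in each box):

```
# one HDD OSD with its RocksDB/WAL on a partition of the shared enterprise SSD
# /dev/sdb and /dev/nvme0n1p1 are placeholders
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```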
I'll run CephFS for the media and RBD for the VMs.
Looking into CRUSH maps and pool design, I'm thinking of optimising storage capacity by using EC for the media pool with OSD as the failure domain. It doesn't matter if the files go offline while a server is down. I'm planning an 8+3 profile.
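Something along these lines for the profile, as far as I can tell (pool and profile names are just placeholders):

```
# 8 data + 3 coding chunks, spread per OSD rather than per host
ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=osd
ceph osd pool create media-data erasure ec-8-3
# needed for CephFS data on an EC pool; the metadata pool still has to be replicated
ceph osd pool set media-data allow_ec_overwrites true
```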
For the VMs I'll just keep a 3/2 (size 3, min_size 2) replicated pool.
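i.e. something like this, with the pool name as a placeholder:

```
ceph osd pool create vm-rbd replicated
ceph osd pool set vm-rbd size 3
ceph osd pool set vm-rbd min_size 2
rbd pool init vm-rbd
```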
I'll need to share the CephFS pool over SMB for access from Windows hosts. For Docker I'll either use SMB or CephFS volumes.
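The simplest approach I've found so far is to kernel-mount CephFS on a gateway box and export the path with plain Samba (mon addresses, client name and paths below are placeholders):

```
# kernel-mount CephFS on whichever host runs Samba
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/media \
    -o name=smbgw,secretfile=/etc/ceph/smbgw.secret

# then a plain Samba share over the mount point
cat >> /etc/samba/smb.conf <<'EOF'
[media]
    path = /mnt/media
    read only = no
EOF
```

Docker containers on the same hosts could presumably just bind-mount that same CephFS mount rather than going through SMB.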
Is this a workable strategy? I've not done much with Ceph but I'm keen to move away from TrueNAS.