Hi
This seems to be the best place/forum to ask questions about Ceph.
My understanding of Ceph is that the underlying storage is OSDs, and these are distributed between nodes.
Pools are then created that sit on top of the OSDs... I think pools are broken into PGs, and each PG is stored on a set of OSDs.
I think with a CRUSH map/rule you can control which OSDs a pool's data ends up on.
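For reference, this is roughly how I've been poking at the layout so far - just read-only commands as far as I know, and their output is what the above understanding is based on:

```
# Show how OSDs are spread across hosts (the CRUSH hierarchy)
ceph osd tree

# Show every pool with its replica count, min_size, PG count and CRUSH rule
ceph osd pool ls detail

# Dump the CRUSH rules themselves, including the failure domain (host vs osd)
ceph osd crush rule dump
```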
Already getting complicated, so let's presume I have the stock-standard CRUSH map/rule.
The standard replication rule is 3 copies, so for each piece of data (or PG?) there are 3 copies - not on the same OSD, and trying not to be on the same host. Sounds all good.
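From what I can tell the replica settings are per pool and can be checked like this - cephfs_data is just the pool name on my setup, so substitute your own:

```
# How many copies the pool keeps
ceph osd pool get cephfs_data size

# How many copies must still be available for Ceph to keep accepting I/O
ceph osd pool get cephfs_data min_size
```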
I have my Proxmox cluster with Ceph installed. I have 6 nodes, but 4 of them are old machines and I have 1 large newer machine... so I want to get down to 3 Proxmox nodes (all 3 have 10G).
I also have a large amount of data. It is split into media created by me/my family - photos, videos etc., probably say 1T, not that much -
and then DVD/Blu-ray rips which I keep online - so say roughly 15T.
I currently have all of this on CephFS.
CephFS is a filesystem that sits on top of 2 pools (metadata and data). So Ceph provides RBD (?), which Proxmox can talk to directly, and also provides CephFS.
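At least on my cluster this is what shows the two pools behind the filesystem and how much raw space the 3x replication is eating (pool names will obviously differ elsewhere):

```
# List each CephFS with its metadata pool and data pool(s)
ceph fs ls

# Per-pool stored vs raw usage
ceph df
```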
So my 1 large CephFS has 18T of data, which means 54T of raw space used (3x). What I would like to do is create a new CephFS just for the large media and make it 2x... to get some space back.
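What I had in mind is something like the sketch below - the names (cfs_large etc.) are made up, and I'm not 100% sure a second filesystem is even the right approach versus just adding a 2x data pool to the existing CephFS:

```
# New data + metadata pools for the media filesystem
ceph osd pool create cfs_large_data 128
ceph osd pool create cfs_large_metadata 32

# Drop the data pool to 2 copies to claw some space back
ceph osd pool set cfs_large_data size 2

# Create a second CephFS on those pools (I believe this needs its own active MDS)
ceph fs new cfs_large cfs_large_metadata cfs_large_data
```

The alternative I've seen mentioned is ceph fs add_data_pool plus a setfattr -n ceph.dir.layout.pool on the media directory, which keeps one filesystem and avoids the extra MDS - happy to hear which way people would go.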
My concern: if I have a CephFS called cfs_large with 2 replicas (instead of 3), what happens if I lose 2 OSDs that happen to be mirrors of each other? Do I lose all of the CephFS, or do I just lose what was on those OSDs? If the chance is that I might lose all of the 17T... then it's probably best to stick to 3x.
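I'm assuming the damage would be per-PG rather than all-or-nothing, i.e. I'd lose whichever PGs had both copies on the two dead OSDs (and whichever files touch those PGs) - is that right? This is how I was planning to check what lands where, again with my hypothetical pool name:

```
# List every PG in the pool with the OSDs holding its copies (the acting set)
ceph pg ls-by-pool cfs_large_data

# Map a single PG (example PG id) to its OSDs
ceph pg map 2.1a
```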
Then the follow-up question: what happens to the RBD / the pool if I lose 3 OSDs that constitute a mirror set? Is the whole pool lost?
Should I be creating smaller pools instead of 1 big pool?
Sorry for the long-winded way of getting there - hope it makes sense.