OK, I found how to do it:
First remove the faulty node from the cluster: pvecm delnode pve2
Then add it back from the GUI with "Join Information".
Well, now it's called "pve" instead of "pve2", but it's working:
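For reference, the same removal and rejoin can be done entirely from the CLI; the IP address below is a placeholder for any existing cluster member:

```shell
# On a healthy cluster node: remove the dead node from the cluster
pvecm delnode pve2

# On the freshly reinstalled node: join the existing cluster
# (replace 192.168.1.10 with the IP of any current cluster member)
pvecm add 192.168.1.10

# Verify quorum and membership afterwards
pvecm status
```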
Hello,
I was running a 3-node cluster perfectly fine, then this weekend one of my nodes couldn't boot. From what I've seen, it was missing a system file and was stuck. I installed PVE again, upgraded it to 5.3-8, and now I want to join the cluster again, but it seems I can't send the info:
The...
I found why Ceph prompted an error message: at the beginning of the CRUSH map, I need to add pool to the types of buckets:
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
type 11 pool
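In case it helps anyone, this is the standard workflow to decompile, edit, and re-inject the CRUSH map with the stock Ceph tooling (the file names are arbitrary):

```shell
# Dump the compiled CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# Decompile it to an editable text file
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt, e.g. add "type 11 pool" to the bucket types ...

# Recompile and push it back to the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```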
Thanks Alwin !
From what I've read, the data placement and replication with such a scenario would be awful.
https://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
This article looks promising, without the cons of the uneven replication of the link you posted.
Since OSD...
Is it possible to select which OSDs your pool will be placed on (in case I stay with a hard-drive-only cluster)?
Targeting a host instead of a class would be good enough.
Yeah, so you can make a pool which targets a specific device class (HDD, SSD, NVMe), but you can't specifically target an OSD by its ID or its name in the CRUSH map, which makes sense since an average production cluster hosts way more than 10 OSDs.
By the way SAS and SATA hard drives would still be...
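The class-based targeting mentioned above doesn't require hand-editing the CRUSH map; the pool name and PG count below are just examples:

```shell
# Create a CRUSH rule that only selects OSDs with the "ssd" device class,
# using "host" as the failure domain under the "default" root
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Create a pool that uses that rule ("fastpool" and 128 PGs are placeholders)
ceph osd pool create fastpool 128 128 replicated replicated_ssd
```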
Aside from an SSD-only pool, is it possible when creating a pool to target specific OSDs? Or does it go against the fundamental principle of Ceph, which is to dynamically rebalance data?
rule replicated_ssd {
	id 2
	type replicated
	min_size 1
	max_size 10
	step take default class ssd
	step chooseleaf firstn 0 type host
	step emit
}
I'm not sure about what the two lines in red do.
Seems like a good idea, since running applications that need high I/O on spinning disks would be nonsense. I thought about something like this:
Obviously I'll need to modify the CRUSH map to map a specific pool to a specific type of OSD.
I found what I was looking for...
So for now I think I'll stick to hard drives and maybe SSDs for journals.
But I wonder what is written in those journals. Is it logs or metadata? Will storing journals on the OSD itself really impact performance?
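For the record, putting the journal (or the BlueStore DB) on a separate SSD is decided at OSD creation time. The device paths below are placeholders, and the exact option names may differ between PVE versions, so double-check `man pveceph` first:

```shell
# Filestore OSD with its journal on a separate SSD (option name per PVE 5.x)
pveceph createosd /dev/sdb -journal_dev /dev/sdc

# BlueStore equivalent on newer releases: put the RocksDB metadata on the SSD
pveceph createosd /dev/sdb -db_dev /dev/sdc
```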
Alright.
By the way, is cache tiering interesting with replicated pools (I will store VMs on them), since I'll mostly use hard drives (maybe SSDs for journals)?
I may invest in a few SSDs to create a cache tiering pool to increase performance.
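Setting up such a cache tier is only a few commands on the Ceph side; "hddpool" and "ssdcache" are hypothetical pool names, assuming both pools already exist and the SSD pool is backed by a CRUSH rule targeting the SSDs:

```shell
# Attach the SSD pool as a cache tier in front of the HDD pool
ceph osd tier add hddpool ssdcache

# Writeback mode: writes land on the SSDs and get flushed to the HDDs later
ceph osd tier cache-mode ssdcache writeback

# Redirect client I/O through the cache tier
ceph osd tier set-overlay hddpool ssdcache
```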