OK, I found how to do it:
First remove the faulty node from the cluster: pvecm delnode pve2
Then add it back from the GUI with "Join Information".
Well, now it's called "pve" instead of "pve2", but it's working.
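For reference, the same thing can be done from the CLI. This is only a rough sketch; the node name and the IP of the remaining cluster member are placeholders:

# on one of the healthy nodes: drop the dead node from the cluster
pvecm delnode pve2
# check the remaining membership
pvecm status
# on the freshly reinstalled node: join using the IP of an existing member
pvecm add 192.168.1.10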
I was running a 3-node cluster perfectly fine, then this weekend one of my nodes couldn't boot. From what I've seen, it was missing a system file and was stuck. I installed PVE again, upgraded it to 5.3-8, and I want to join the cluster again, but it seems I can't send the info.
I found why Ceph prompted an error message: at the beginning of the CRUSH map, I need to add "pool" to the types of buckets:
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
type 11 pool
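For anyone else hitting this, the map can be pulled out, edited and pushed back like this (file names are just examples):

# dump the compiled CRUSH map and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (add the "type 11 pool" line), then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new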
Thanks Alwin!
From what I've read, the data placement and replication with such a scenario would be awful.
This article looks promising, without the cons of the uneven replication from the link you posted.
Yeah, so you can make a pool which targets a specific device class (HDD, SSD, NVMe), but you can't specifically target an OSD by its ID or its name in the CRUSH map, which makes sense since an average production cluster hosts way more than 10 OSDs.
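For the record, the device classes themselves can be listed and reassigned per OSD; just a sketch, osd.2 is an example ID:

# list the device classes Ceph knows about
ceph osd crush class ls
# clear and reassign the class of a single OSD
ceph osd crush rm-device-class osd.2
ceph osd crush set-device-class nvme osd.2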
By the way, SAS and SATA hard drives would still be...
Seems like a good idea, since running applications that need high I/O on spinning disks would be nonsense. I thought about something like this:
Obviously I'll need to modify the CRUSH map to map a specific pool to a specific type of OSD.
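If I understand the device class mechanism correctly, it should boil down to something like this on Luminous; the rule and pool names here are made up:

# replicated rule that only picks OSDs of class ssd, failure domain host
ceph osd crush rule create-replicated ssd_rule default host ssd
# same idea for the spinning disks
ceph osd crush rule create-replicated hdd_rule default host hdd
# point a pool at the matching rule
ceph osd pool set vm_ssd crush_rule ssd_rule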
I found what I was looking for...
So for now I think I'll stick to hard drives and maybe SSDs for journals.
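If I go that way, the journal device can apparently just be pointed at the SSD when creating the OSD; a sketch with placeholder devices, assuming the pveceph syntax from the PVE 5 docs:

# create an OSD on the spinning disk with its journal on the SSD
pveceph createosd /dev/sdb -journal_dev /dev/sdc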
But I wonder what is written in those journals. Is it logs, metadata? And will storing journals on the OSD really impact performance?
By the way, is cache tiering interesting with replicated pools (I will store VMs on it), since I'll mostly use hard drives (maybe SSDs for logs)?
I may invest in a few SSDs to create a cache tiering pool to increase performance.
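If I go down that road, the basic wiring for a writeback cache tier would look roughly like this; the pool names are placeholders:

# put the SSD pool in front of the HDD-backed VM pool
ceph osd tier add vm_hdd vm_cache
ceph osd tier cache-mode vm_cache writeback
ceph osd tier set-overlay vm_hdd vm_cache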