Recent content by Yvon

  1. Can't join a cluster

    Thank you Chris, it's working perfectly fine now.
  2. Can't join a cluster

    Ok, I found how to do it: first remove the faulty node from the cluster: pvecm delnode pve2. Then add it back from the GUI with "Join Information". Well, now it's called "pve" instead of "pve2", but it's working :
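The two steps described above can be sketched as follows (the node name pve2 comes from the post; the placeholder IP is illustrative, and the GUI "Join Information" dialog wraps the same join operation as the CLI):

```shell
# On a surviving cluster node: remove the dead node from the cluster
pvecm delnode pve2

# On the reinstalled node: join the existing cluster
# (equivalent to pasting the "Join Information" in the GUI)
pvecm add <IP-of-an-existing-cluster-node>
```

Note that Proxmox recommends the removed node never rejoin with the same name without a reinstall, which matches what happened here (the node came back as "pve").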
  3. Can't join a cluster

    Hello, I was running a 3-node cluster perfectly fine, then this weekend one of my nodes couldn't boot. From what I've seen, it was missing a system file and was stuck. I installed PVE again and upgraded it to 5.3-8, and I want to join the cluster again, but it seems I can't send the info: The...
  4. CEPH placement group and storage useful capacity

    I found why Ceph prompted an error message: at the beginning of the CRUSH map, I need to add "pool" to the types of buckets: # types type 0 osd type 1 host type 2 chassis type 3 rack type 4 row type 5 pdu type 6 pod type 7 room type 8 datacenter type 9 region type 10 root type 11 pool
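Laid out one entry per line, the bucket-type list from the post reads:

```
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
type 11 pool
```

Every bucket type referenced in a CRUSH map must be declared in this list, which is why the custom "pool" type had to be added before the pool buckets would compile.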
  5. CEPH placement group and storage useful capacity

    I tried to aggregate OSDs into buckets like this: pool ssd { id -9 alg straw2 hash 0 # rjenkins1 item osd.0 weight 0.455 item osd.2 weight 0.454 } pool sas { id -10 alg straw2 hash 0 # rjenkins1 item osd.1 weight 0.455 item osd.3 weight 0.454 } with a...
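Reformatted, the two custom buckets from the post read:

```
pool ssd {
    id -9
    alg straw2
    hash 0  # rjenkins1
    item osd.0 weight 0.455
    item osd.2 weight 0.454
}
pool sas {
    id -10
    alg straw2
    hash 0  # rjenkins1
    item osd.1 weight 0.455
    item osd.3 weight 0.454
}
```

Each bucket groups OSDs under the custom "pool" bucket type, with weights roughly proportional to each drive's capacity in TiB.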
  6. CEPH placement group and storage useful capacity

    Thanks Alwin! From what I've read, the data placement and replication with such a scenario would be awful. https://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/ This article looks promising, without the cons of the uneven replication of the link you posted. Since OSD...
  7. CEPH placement group and storage useful capacity

    Is it possible to select which OSDs your pool will be assigned to (in case I stay with a hard-drive-only cluster)? Targeting a host instead of a class would be good enough.
  8. CEPH placement group and storage useful capacity

    Yeah, so you can make a pool which targets a specific class (HDD, SSD, NVMe), but you can't specifically target an OSD by its ID or its name in the CRUSH map, which makes sense since an average production cluster hosts way more than 10 OSDs. By the way, SAS and SATA hard drives would still be...
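Targeting a device class as described can be done without hand-editing the CRUSH map, assuming Ceph Luminous or later where device classes exist (the rule and pool names below are examples):

```shell
# Create a replicated CRUSH rule limited to OSDs of class "hdd",
# using host as the failure domain
ceph osd crush rule create-replicated hdd-only default host hdd

# Point an existing pool at that rule
ceph osd pool set mypool crush_rule hdd-only
```

Ceph auto-detects classes (hdd, ssd, nvme) per OSD; `ceph osd crush tree --show-shadow` shows the per-class shadow hierarchy the rule selects from.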
  9. ceph df: pools

    Hi! From what I've seen, USED should be the space taken by your VMs and other RBD objects. MAX AVAIL should be (AVAIL - USED).
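For what it's worth, with replicated pools MAX AVAIL is usually closer to the cluster's raw free space divided by the pool's replication factor than to a simple AVAIL - USED. A rough sketch of that estimate (the function name and figures are illustrative, not Ceph's exact accounting, which also factors in the full ratio and OSD imbalance):

```python
def pool_max_avail(raw_avail_bytes: int, replication_factor: int) -> int:
    """Rough MAX AVAIL estimate for a replicated pool: raw free space
    divided by the pool's replication factor (its 'size')."""
    return raw_avail_bytes // replication_factor

# e.g. 9 TiB of raw free space with 3x replication leaves ~3 TiB usable
print(pool_max_avail(9 * 2**40, 3))
```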
  10. CEPH placement group and storage useful capacity

    Aside from an SSD-only pool, is it possible when creating a pool to target specific OSDs? Or does it go against the fundamental principle of Ceph, which is to dynamically rebalance data?
  11. CEPH placement group and storage useful capacity

    rule replicated_ssd { id 2 type replicated min_size 1 max_size 10 step take default class ssd step chooseleaf firstn 0 type host step emit } I'm not sure about what the two lines in red do.
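Formatted, the rule from the post reads as follows. "step take default class ssd" enters the CRUSH hierarchy at the default root but restricted to OSDs tagged with the ssd device class; "step chooseleaf firstn 0 type host" then picks one OSD per distinct host, up to the pool's replica count (firstn 0 means "as many as the pool's size"):

```
rule replicated_ssd {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
```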
  12. CEPH placement group and storage useful capacity

    Seems like a good idea, since running applications that need high I/O on spinning disks would be nonsense. I thought about something like this: Obviously, I'll need to modify the CRUSH map to map a specific pool to a specific type of OSD. I found what I was looking for...
  13. CEPH placement group and storage useful capacity

    So for now I think I'll stick to hard drives, and maybe SSDs for journals. But I wonder what is written in those journals. Is it logs or metadata, and will storing journals on the OSD really impact performance?
  14. CEPH placement group and storage useful capacity

    Never mind: I found that yesterday, a few minutes after posting my question...
  15. CEPH placement group and storage useful capacity

    Alright. By the way, is cache tiering interesting with replicated pools (I will store VMs on them), since I'll mostly use hard drives (maybe SSDs for logs)? I may invest in a few SSDs to create a cache-tiering pool to increase performance.
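For reference, attaching a cache tier in front of a replicated base pool generally looks like this (pool names are examples; the base pool holds the VM images, the cache pool sits on the SSDs):

```shell
# Attach the SSD-backed pool as a cache tier in front of the base pool
ceph osd tier add vm-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay vm-pool ssd-cache
```

Writeback mode absorbs writes on the SSD tier and flushes to the base pool later; note that cache tiering also needs hit-set and target-size parameters tuned before it behaves well under real load.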

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE and Proxmox Mail Gateway. We think our community is one of the best thanks to people like you!