crushmap

  1. PVE Ceph Rules for HDD Pools of Different Sizes

     I apologize, as I am sure this question has been answered a thousand times, but I cannot find the appropriate documentation, and I'm apparently still too new on my Proxmox/Ceph journey to find the right search terms. I have a 3-node Proxmox cluster with all HDD disks and an already setup...
  2. Ceph CRUSH Map Reset After Reboot

     Hello everyone, I have been using PVE at home for over 5 years for quite a few VMs, so far with ZFS. The host is an HP DL380P Gen8 with 56GB of RAM, and that has worked flawlessly so far. Now to my plan: I have read up a bit on Ceph and wanted to test whether it would work as a storage pool for...
  3. CEPH replication to different hosts instead of OSDs (PVE 6.2)

     Hello, could you please advise how to safely change the replicas to be placed on different hosts instead of OSDs for the following crush map (PVE 6.2): # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1... (one way to make this change is sketched after this list)
  4. [SOLVED] How to define OSD weight in the CRUSH map

     Hi, after adding an OSD to Ceph it is advisable to create a matching entry in the CRUSH map with a weight that depends on the disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: how is the weight derived from the disk size? Which algorithm can be... (a worked weight calculation follows this list)
  5. Ceph pool tweaking....

     I have managed to successfully deploy a test cluster over three sites connected with 100 Mbit fiber. All three nodes have 3 OSDs each, and there are the default pools for cephfs-data and cephfs-metadata. The performance of this one pool stretched across the 100 Mbit links is low but acceptable for...
  6. [SOLVED] Ceph: creating pool for SSD only

     Hi, if I want to create a pool with SSDs only that is separated from the HDDs, I assume I need to manipulate the CRUSH map and add another root. Is my assumption correct? Thanks. (A device-class alternative is sketched after this list.)
  7. re: crushmap oops

     A bit of a Ceph beginner here. I followed the directions from Sébastien Han and built a Ceph crushmap with HDD and SSD in the same box. There are 8 nodes, each contributing an SSD and an HDD. I only noticed after putting some data on it that I had goofed and put a single HDD in the SSD group... (one way to correct a misplaced OSD is sketched after this list)
  8. Creating/Using Multiple Ceph Pools

     Hardware configuration: 6 systems in a Proxmox cluster; works fine. Each system has two 6TB "storage" disks, for a total of 12 disks. One 6TB physical disk on each system is of storage class "A" (e.g. a self-encrypting fast spinner); one 6TB physical disk on each system is of storage class "B"... (a custom device-class approach is sketched after this list)
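
For thread 3, a minimal sketch of moving the failure domain from OSD to host. This is not from the thread itself; the rule name replicated_host and the pool name vm-pool are placeholders, and it assumes a standard replicated pool:

    # Create a new replicated rule whose failure domain is "host" instead of "osd"
    ceph osd crush rule create-replicated replicated_host default host

    # Point the existing pool at the new rule (pool name is a placeholder)
    ceph osd pool set vm-pool crush_rule replicated_host

    # Equivalent change when editing a decompiled crush map by hand:
    #   step chooseleaf firstn 0 type host    (instead of "type osd")

Changing the pool's crush_rule triggers data movement, so it is worth doing this during a quiet period and watching ceph -s.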
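For thread 4, the usual convention is that the CRUSH weight equals the OSD's raw capacity in TiB, and current Ceph tooling sets it automatically when the OSD is created. A worked example, with osd.7 and host pve1 as hypothetical names:

    # Weight = raw capacity in TiB (1 TiB = 2^40 bytes)
    # A 4 TB disk: 4 * 10^12 / 2^40 ≈ 3.64
    ceph osd crush set osd.7 3.64 root=default host=pve1

    # To adjust the weight of an OSD that is already placed:
    ceph osd crush reweight osd.7 3.64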
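For thread 6, a second CRUSH root is the pre-Luminous approach; since Luminous, device classes make a separate root unnecessary. A minimal sketch, assuming the OSDs already report hdd/ssd classes (the rule name ssd_only, the pool name ssd-pool, and the PG count of 128 are placeholders):

    # Check the auto-detected device classes
    ceph osd tree

    # Rule that only selects SSD-class OSDs, with host as the failure domain
    ceph osd crush rule create-replicated ssd_only default host ssd

    # Create a pool bound to that rule
    ceph osd pool create ssd-pool 128 128 replicated ssd_only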
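For thread 7, with a hand-built dual-root map in the style of Sébastien Han's guide, one way to correct a misplaced OSD is to edit the decompiled map and re-inject it. A sketch only; the file names are arbitrary and the exact bucket names depend on the existing map:

    ceph osd getcrushmap -o crush.bin          # export the current map
    crushtool -d crush.bin -o crush.txt        # decompile to text
    # ...move the misplaced osd.N entry from the SSD host bucket to the HDD host bucket...
    crushtool -c crush.txt -o crush-new.bin    # recompile
    ceph osd setcrushmap -i crush-new.bin      # inject; expect data movement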
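For thread 8, custom device classes avoid hand-editing the map entirely. A minimal sketch, assuming both disk types are spinners that Ceph auto-classifies as hdd; the class names classA/classB, the OSD IDs, and the rule, pool, and PG numbers are all placeholders:

    # Re-tag one OSD per host with a custom class (repeat for each OSD;
    # the existing class must be removed before a new one can be set)
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class classA osd.0

    # One rule per class, host failure domain
    ceph osd crush rule create-replicated rule_classA default host classA
    ceph osd crush rule create-replicated rule_classB default host classB

    # One pool per rule
    ceph osd pool create pool-a 64 64 replicated rule_classA
    ceph osd pool create pool-b 64 64 replicated rule_classB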
