crushmap

  1.

    Proxmox 4-node cluster and CEPH EC 4+2 with Custom Crush Rule

    Dear Community. We are currently in the process of building a Proxmox cluster with Ceph as the underlying storage. We have 4 nodes, with 4 x 100TB OSDs attached to each node (16 OSDs total) and plan to scale this out by adding another 4 nodes with the same number of OSDs attached to each...
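
    A minimal sketch of the kind of rule usually used for EC 4+2 on only 4 hosts (two chunks per host); the profile name, rule name and rule id below are placeholders, not taken from the thread:

      # Hypothetical EC profile; the failure domain is handled by the custom rule below
      ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=osd

      # In the decompiled CRUSH map: pick 4 hosts, then 2 OSDs on each of them
      rule ec42-rule {
          id 2
          type erasure
          step set_chooseleaf_tries 5
          step set_choose_tries 100
          step take default
          step choose indep 4 type host
          step chooseleaf indep 2 type osd
          step emit
      }
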
  2.

    Question about CEPH Topology

    Hi everyone, I would like some help regarding CEPH topology. I have the following environment: - 5x Servers (PVE01,02,03,04,05) - PVE 01,02, and 03 in one datacenter and PVE04 and 05 in another datacenter. - 6x Disks in each (3x HDD and 3x SSD) - All of the same capacity/model. I would like...
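
    If the split between the two datacenters should be visible to CRUSH, one common starting point is to add datacenter buckets under the root and move the host buckets into them; a sketch, assuming the default root and that the host buckets are named after the nodes:

      ceph osd crush add-bucket dc1 datacenter
      ceph osd crush add-bucket dc2 datacenter
      ceph osd crush move dc1 root=default
      ceph osd crush move dc2 root=default
      ceph osd crush move pve01 datacenter=dc1   # repeat for pve02, pve03
      ceph osd crush move pve04 datacenter=dc2   # repeat for pve05
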
  3.

    PVE Ceph Rules for HDD Pools of Different Sizes

    I apologize, as I am sure this question has been answered a thousand times, but I cannot find appropriate documentation and I'm still too new on my Proxmox/Ceph journey to come up with the right Google search terms. I have a 3-node Proxmox cluster with all HDD disks and an already set up...
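
    One way to split pools across HDDs of different sizes is to give one group of disks its own device class and bind a rule to it; a sketch with made-up OSD ids, class, rule and pool names:

      # Re-tag the larger disks with a custom device class
      ceph osd crush rm-device-class osd.3 osd.4 osd.5
      ceph osd crush set-device-class hddbig osd.3 osd.4 osd.5

      # Rule and pool that only place data on that class
      ceph osd crush rule create-replicated big-hdd-rule default host hddbig
      ceph osd pool create big-hdd-pool 64 64 replicated big-hdd-rule
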
  4.

    Ceph CRUSH Map Reset After Reboot

    Hello everyone, I have been using PVE at home for quite a few VMs for over 5 years, so far with ZFS. The host is an HP DL380P Gen8 with 56GB of RAM, and that has been working flawlessly so far. Now to my plan: I have read up a bit on Ceph and wanted to test whether it would work as a storage pool for...
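
    Since the question is cut off here, only a guess based on the title: manual CRUSH map edits are typically lost after a reboot because the OSDs re-register their CRUSH location at startup, which can be disabled in ceph.conf:

      [osd]
      # keep manually edited CRUSH locations across OSD restarts
      osd crush update on start = false
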
  5.

    CEPH replication to different hosts instead of OSDs (PVE 6.2)

    Hello, could you please advise on how to safely change the replicas to be on different hosts instead of OSDs for the following crush map (PVE 6.2): # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1...
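
    The usual way to do this is to change the failure domain of the replicated rule from osd to host in the decompiled CRUSH map and inject it again; a sketch with example file names (expect data movement once the new map is set):

      ceph osd getcrushmap -o crushmap.bin
      crushtool -d crushmap.bin -o crushmap.txt
      # in the replicated rule, change
      #   step chooseleaf firstn 0 type osd
      # to
      #   step chooseleaf firstn 0 type host
      crushtool -c crushmap.txt -o crushmap.new
      ceph osd setcrushmap -i crushmap.new
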
  6.

    [SOLVED] Howto define OSD weight in Crush map

    Hi, after adding an OSD to Ceph it is advisable to create a matching entry in the CRUSH map with a weight that depends on the disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: How is the weight defined depending on disk size? Which algorithm can be...
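
    By convention the CRUSH weight is simply the disk size expressed in TiB, and newer releases normally set it automatically when the OSD is created; a worked example with a made-up OSD id and host name:

      # 4 TB disk: 4 * 10^12 bytes / 2^40 bytes per TiB ≈ 3.64, so the weight is 3.64
      ceph osd crush set osd.7 3.64 root=default host=pve1
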
  7.

    Ceph pool tweaking....

    I have managed to successfully deploy a test cluster over three sites connected with 100mbit fiber. All three nodes have 3 OSDs each, and there are the default pools for cephfs-data and cephfs-metadata. The performance of this one pool stretched across the 100mbit links is low but acceptable for...
  8.

    [SOLVED] Ceph: creating pool for SSD only

    Hi, in case I want to create a pool with SSDs only that is separated from the HDDs, I need to manipulate the CRUSH map and add another root. Is my assumption correct? THX
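
    A separate root is one way to do it, but since Ceph Luminous the same split is usually done with device classes and no CRUSH map editing at all; a sketch with made-up rule and pool names, assuming the OSDs already carry the automatic hdd/ssd classes:

      ceph osd crush rule create-replicated ssd-only default host ssd
      ceph osd pool create ssd-pool 128 128 replicated ssd-only
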
  9.

    re: crushmap oops

    A bit of a Ceph beginner here. I followed the directions from Sébastien Han and built out a Ceph crushmap with HDD and SSD in the same box. There are 8 nodes, each contributing an SSD and an HDD. I only noticed after putting some data on there that I goofed and put a single HDD in the SSD group...
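
    With the dual-root layout from that guide, the misfiled OSD can be re-placed under its HDD bucket; a sketch with example id, weight and bucket names (Ceph will rebalance the affected data afterwards):

      ceph osd crush set osd.11 3.64 root=hdd host=node3-hdd
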
  10.

    Creating/Using Multiple Ceph Pools

    Hardware configuration: 6 systems in a Proxmox cluster. Works fine. Each system has two 6TB "storage" disks, for a total of 12 disks. One 6TB physical disk on each system is of storage class "A" (e.g. a self-encrypting fast spinner); one 6TB physical disk on each system is of storage class "B"...
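
    The same device-class approach works here: tag each disk with a custom class and create one rule and pool per class; a sketch with illustrative class, OSD, rule and pool names (auto-assigned classes may need to be removed with rm-device-class first):

      ceph osd crush set-device-class classa osd.0
      ceph osd crush set-device-class classb osd.1

      ceph osd crush rule create-replicated rule-a default host classa
      ceph osd crush rule create-replicated rule-b default host classb
      ceph osd pool create pool-a 128 128 replicated rule-a
      ceph osd pool create pool-b 128 128 replicated rule-b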
