I apologize, as I'm sure this question has been answered a thousand times, but I cannot find appropriate documentation, and I'm still too new on my Proxmox/Ceph journey to come up with the right search terms.
I have a 3-node Proxmox cluster with all-HDD disks and an already set up...
I have been using PVE at home for over 5 years for quite a few VMs, so far with ZFS. The host is an HP DL380p Gen8 with 56 GB of RAM.
That has worked flawlessly so far.
Now to my plan:
I have read up a bit on Ceph and wanted to test whether it would work as a storage pool for...
Could you please advise how to safely change the replicas to be placed on different hosts instead of OSDs, for the following CRUSH map (PVE 6.2):
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1...
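For a map like the one above, the usual fix is to change the failure domain in the replicated rule from osd to host, so that each replica lands on a different node. A minimal sketch of what the rule section might look like afterwards (the rule name and id are assumptions, since the excerpt does not show the original rule):

```
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host    # was: type osd
        step emit
}
```

The round trip is: dump the map with `ceph osd getcrushmap -o crushmap.bin`, decompile with `crushtool -d crushmap.bin -o crushmap.txt`, edit the rule, recompile with `crushtool -c crushmap.txt -o crushmap.new`, and inject with `ceph osd setcrushmap -i crushmap.new`. Expect a rebalance once the new rule takes effect.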
After adding an OSD to Ceph, it is advisable to create a matching entry in the CRUSH map with a weight that depends on the disk size.
ceph osd crush set osd.<id> <weight> root=default host=<hostname>
How is the weight derived from the disk size?
Which algorithm can be...
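By convention, a CRUSH weight of 1.0 corresponds to 1 TiB of raw capacity, so the weight is simply the disk's size expressed in TiB (tools like ceph-volume set this automatically when they create an OSD). A sketch for a 4 TB (4 × 10^12 byte) drive; the osd id and hostname in the final command are hypothetical:

```shell
# CRUSH weight = raw capacity in TiB (1 TiB = 1024^4 bytes)
weight=$(awk 'BEGIN { printf "%.5f", 4000000000000 / (1024 ^ 4) }')
echo "$weight"   # a marketing-4TB disk comes out to roughly 3.64

# then set it in the CRUSH map (hypothetical id/hostname):
# ceph osd crush set osd.7 "$weight" root=default host=node1
```

The exact number of decimal places does not matter much; what matters is that the weights are proportional to capacity, so data distributes evenly across mixed disk sizes.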
I have managed to successfully deploy a test cluster over three sites
connected with 100 Mbit/s fiber.
All three nodes have 3 OSDs each, and there are the default pools for cephfs-data and cephfs-metadata.
The performance of this one pool stretched across the 100 Mbit/s links is low but acceptable for...
A bit of a Ceph beginner here.
I followed the directions from Sébastien Han and built out a Ceph CRUSH map with HDDs and SSDs in the same box. There are 8 nodes, each contributing an SSD and an HDD.
I only noticed after putting some data on there that I goofed and put a single HDD in the SSD group...
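With the two-root layout from that guide (separate ssd and hdd roots with per-host sub-buckets), the misplaced OSD can be repositioned with a single crush set, which both moves it and kicks off the rebalance. A sketch, where the osd id, weight, and bucket names are assumptions about your map:

```shell
# move the misplaced HDD (assumed here to be osd.12) out of the ssd
# hierarchy and into the hdd one; Ceph rebalances the affected PGs
ceph osd crush set osd.12 5.45798 root=hdd host=node3-hdd

# on clusters with device classes (Luminous and later), correcting
# the class is an alternative to maintaining separate roots:
# ceph osd crush rm-device-class osd.12
# ceph osd crush set-device-class hdd osd.12
```

Expect data movement on the pools whose rules reference either root, so do this during a quiet window if the cluster is busy.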
6 systems in a Proxmox cluster. Works fine.
Each system has two 6TB "storage" disks, for a total of 12 disks.
One 6TB physical disk on each system is of storage class "A" (e.g. a self-encrypting fast spinner).
One 6TB physical disk on each system is of storage class "B"...
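For a layout like this, Ceph's device classes can keep both disk types in one hierarchy while still steering each pool to one class. A sketch, where the class, rule, and pool names mirror the "A"/"B" labels above and are all assumptions:

```shell
# tag each OSD with a custom device class (assumed osd ids);
# an existing class must be removed before a new one is set
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class class-a osd.0

# one replicated rule per class, with a host-level failure domain
ceph osd crush rule create-replicated rule-class-a default host class-a
ceph osd crush rule create-replicated rule-class-b default host class-b

# point a pool (hypothetical name) at the matching rule
ceph osd pool set pool-a crush_rule rule-class-a
```

This avoids the older pattern of maintaining separate CRUSH roots per media type, and with one disk of each class per host, a size-3 pool per class still spreads replicas across three different nodes.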