Ceph: HDDs degrade SSD performance

cola16

I'm trying to separate my HDDs and SSDs.
I did this with the device CLASS of each OSD and a class-specific CRUSH RULE per POOL.

When I copy files from one HDD pool to another, it slows down the whole Ceph cluster, including the SSD pools.
Is this working as intended?

Or is my setup wrong?

The split is certainly reflected in the reported remaining capacity,
but I can't separate the performance at all.
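Roughly speaking, this is how I set it up (a minimal sketch from memory; `ssd-pool` and `hdd-pool` are placeholder names, not my real pool names):
```
# Assign device classes to the OSDs (if a class was auto-detected it
# has to be removed first with "ceph osd crush rm-device-class").
ceph osd crush set-device-class ssd osd.2 osd.10
ceph osd crush set-device-class hdd osd.7 osd.8

# Create class-specific replicated rules (these correspond to the
# replicated_ssd / replicated_hdd rules in the map below).
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Point each pool at its rule (placeholder pool names).
ceph osd pool set ssd-pool crush_rule replicated_ssd
ceph osd pool set hdd-pool crush_rule replicated_hdd
```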
 
Here is the CRUSH map:
```
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class ssd
device 2 osd.2 class ssd
device 4 osd.4 class ssd
device 5 osd.5 class ssd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class ssd
device 11 osd.11 class hdd
device 12 osd.12 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host pve1 {
    id -3 # do not change unnecessarily
    id -13 class ssd # do not change unnecessarily
    id -9 class hdd # do not change unnecessarily
    id -2 class nvme4 # do not change unnecessarily
    # weight 8.96484
    alg straw2
    hash 0 # rjenkins1
    item osd.8 weight 3.54279
    item osd.7 weight 3.54648
    item osd.2 weight 0.93779
    item osd.10 weight 0.93779
}
host pve2 {
    id -5 # do not change unnecessarily
    id -14 class ssd # do not change unnecessarily
    id -10 class hdd # do not change unnecessarily
    id -4 class nvme4 # do not change unnecessarily
    # weight 7.39078
    alg straw2
    hash 0 # rjenkins1
    item osd.4 weight 0.93779
    item osd.5 weight 0.93779
    item osd.6 weight 2.75760
    item osd.9 weight 2.75760
}
host pve3 {
    id -6 # do not change unnecessarily
    id -7 class ssd # do not change unnecessarily
    id -11 class hdd # do not change unnecessarily
    id -15 class nvme4 # do not change unnecessarily
    # weight 7.33459
    alg straw2
    hash 0 # rjenkins1
    item osd.12 weight 2.75760
    item osd.11 weight 2.75760
    item osd.0 weight 1.81940
}
root default {
    id -1 # do not change unnecessarily
    id -16 class ssd # do not change unnecessarily
    id -12 class hdd # do not change unnecessarily
    id -8 class nvme4 # do not change unnecessarily
    # weight 22.75241
    alg straw2
    hash 0 # rjenkins1
    item pve1 weight 8.02704
    item pve2 weight 7.39078
    item pve3 weight 7.33459
}

# rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_nvme {
    id 1
    type replicated
    step take default class nvme4
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_hdd {
    id 2
    type replicated
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 3
    type replicated
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
rule osdpol_rbd_diskimage-ec-data {
    id 4
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default class hdd
    step choose indep 0 type osd
    step emit
}
rule cephfs-userdata-hdd_ec {
    id 5
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default class hdd
    step choose indep 0 type osd
    step emit
}

# end crush map
```
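This is how I checked which rule each pool uses and how the capacity splits per class (commands only, output omitted; `<pool>` stands for each pool name):
```
# per-class shadow hierarchy and per-OSD usage, to see the hdd/ssd split
ceph osd crush tree --show-shadow
ceph osd df tree

# which CRUSH rule each pool is actually using
ceph osd pool ls detail
ceph osd pool get <pool> crush_rule
```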