[SOLVED] How to define OSD weight in the CRUSH map

cmonty14

Hi,
after adding an OSD to Ceph it is advisable to create a corresponding entry in the CRUSH map with a weight that depends on the disk size.

Example:
ceph osd crush set osd.<id> <weight> root=default host=<hostname>

Question:
How is the weight defined depending on disk size?
Which algorithm can be used to calculate the weight?

From my original Ceph installation (PVE 5 + Ceph Luminous), the CRUSH map entries were:
- HDD device with 1.80 TB size (output of lsscsi -s): weight 1.627229
- NVMe device with 3.20 TB size (output of lsscsi -s): weight 2.910889

THX
 
after adding an OSD to Ceph it is advisable to create a corresponding entry in the CRUSH map with a weight that depends on the disk size.
The entry is usually made automatically.

How is the weight defined depending on disk size?
Which algorithm can be used to calculate the weight?
The weight is the size in TB (e.g. 4 TB ~ 4.0). The actual usable size can differ a little bit, hence the uneven numbers.
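As a rough sanity check, the numbers from the first post line up with the device capacity converted to TiB (bytes divided by 2^40); this is only an illustrative calculation, not necessarily exactly how Ceph derives the value:

# 3.2 TB nominal capacity (3.2e12 bytes) expressed in TiB
echo "scale=6; 3200000000000 / (1024^4)" | bc
# prints roughly 2.910383, close to the 2.910889 weight quoted above; the small
# difference comes from the usable size deviating slightly from the nominal size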
 
OK.
I created a new OSD (from scratch), but there's no relevant entry in the CRUSH map except for "device 8 osd.8 class hdd" in the "devices" section.

root@ld5505:~# pveceph osd create /dev/sdbm --db_dev /dev/sdbk --db_size 10
create OSD on /dev/sdbm (bluestore)
creating block.db on '/dev/sdbk'
Physical volume "/dev/sdbk" successfully created.
Volume group "ceph-1b9ee177-ecd7-4506-94f3-9a7c06d075b4" successfully created
Logical volume "osd-db-bbf06c83-a9f7-4786-9c47-d71e1625de80" created.
using 'ceph-1b9ee177-ecd7-4506-94f3-9a7c06d075b4/osd-db-bbf06c83-a9f7-4786-9c47-d71e1625de80' for block.db
wipe disk/partition: /dev/sdbm
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.925545 s, 227 MB/s
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b6484e06-c580-40d7-82be-f53b7d824ab4
Running command: /sbin/vgcreate -s 1G --force --yes ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec /dev/sdbm
stdout: Physical volume "/dev/sdbm" successfully created.
stdout: Volume group "ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec" successfully created
Running command: /sbin/lvcreate --yes -l 100%FREE -n osd-block-b6484e06-c580-40d7-82be-f53b7d824ab4 ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec
stdout: Logical volume "osd-block-b6484e06-c580-40d7-82be-f53b7d824ab4" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-8
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph /dev/ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec/osd-block-b6484e06-c580-40d7-82be-f53b7d824ab4
Running command: /bin/chown -R ceph:ceph /dev/dm-41
Running command: /bin/ln -s /dev/ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec/osd-block-b6484e06-c580-40d7-82be-f53b7d824ab4 /var/lib/ceph/osd/ceph-8/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-8/activate.monmap
stderr: 2019-09-03 15:53:38.588 7ffa11b17700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2019-09-03 15:53:38.588 7ffa11b17700 -1 AuthRegistry(0x7ffa0c07f818) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 5
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-8/keyring --create-keyring --name osd.8 --add-key AQBXcG5dFmYdJRAAopdFyDgkyY/TsOar+3+ltA==
stdout: creating /var/lib/ceph/osd/ceph-8/keyring
added entity osd.8 auth(key=AQBXcG5dFmYdJRAAopdFyDgkyY/TsOar+3+ltA==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/
Running command: /bin/chown -h ceph:ceph /dev/ceph-1b9ee177-ecd7-4506-94f3-9a7c06d075b4/osd-db-bbf06c83-a9f7-4786-9c47-d71e1625de80
Running command: /bin/chown -R ceph:ceph /dev/dm-16
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-1b9ee177-ecd7-4506-94f3-9a7c06d075b4/osd-db-bbf06c83-a9f7-4786-9c47-d71e1625de80 --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid b6484e06-c580-40d7-82be-f53b7d824ab4 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/sdbm
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec/osd-block-b6484e06-c580-40d7-82be-f53b7d824ab4 --path /var/lib/ceph/osd/ceph-8 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-9acb6cc5-bec3-4223-ad8a-61e9601bbaec/osd-block-b6484e06-c580-40d7-82be-f53b7d824ab4 /var/lib/ceph/osd/ceph-8/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block
Running command: /bin/chown -R ceph:ceph /dev/dm-41
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
Running command: /bin/ln -snf /dev/ceph-1b9ee177-ecd7-4506-94f3-9a7c06d075b4/osd-db-bbf06c83-a9f7-4786-9c47-d71e1625de80 /var/lib/ceph/osd/ceph-8/block.db
Running command: /bin/chown -h ceph:ceph /dev/ceph-1b9ee177-ecd7-4506-94f3-9a7c06d075b4/osd-db-bbf06c83-a9f7-4786-9c47-d71e1625de80
Running command: /bin/chown -R ceph:ceph /dev/dm-16
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block.db
Running command: /bin/chown -R ceph:ceph /dev/dm-16
Running command: /bin/systemctl enable ceph-volume@lvm-8-b6484e06-c580-40d7-82be-f53b7d824ab4
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-8-b6484e06-c580-40d7-82be-f53b7d824ab4.service -> /lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@8
Running command: /bin/systemctl start ceph-osd@8
--> ceph-volume lvm activate successful for osd ID: 8
--> ceph-volume lvm create successful for: /dev/sdbm


Therefore I was asking how the weight is defined, because I need to adjust the CRUSH map manually.
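For reference, manually adjusting the CRUSH map would mean the usual dump/decompile/edit/recompile cycle; a rough sketch (file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin     # dump the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
# edit crushmap.txt, e.g. add the osd.8 item with its weight under the host bucket
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new     # inject the modified map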
 
I created a new OSD (from scratch), but there's no relevant entry in the CRUSH map except for "device 8 osd.8 class hdd" in the "devices" section.
This is usually at the top of the crushmap; osd.8 will show up again under the host bucket, and there it will have the weight. Also, you can set the weight without touching the crushmap, e.g. ceph osd crush reweight osd.0 0.02999.
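Applied to the OSD created above, that would look something like the following (the weight value is only an example, taken from the 1.80 TB HDD figure in the first post):

ceph osd tree                             # osd.8 should be listed under its host bucket with its weight
ceph osd crush reweight osd.8 1.627229    # set the weight manually if it needs adjusting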
 
