Hi,
I have configured a 3-node cluster with currently 10 OSDs.
root@ld4257:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-10 43.66196 root hdd_strgbox
-27 0 host ld4257-hdd_strgbox
-28 21.83098 host ld4464-hdd_strgbox
3 hdd 7.27699 osd.3 up 1.00000 1.00000
4 hdd 7.27699 osd.4 up 1.00000 1.00000
5 hdd 7.27699 osd.5 up 1.00000 1.00000
-29 21.83098 host ld4465-hdd_strgbox
6 hdd 7.27699 osd.6 up 1.00000 1.00000
7 hdd 7.27699 osd.7 up 1.00000 1.00000
8 hdd 7.27699 osd.8 up 1.00000 1.00000
-1 3.26999 root default
-3 3.26999 host ld4257
0 hdd 1.09000 osd.0 up 1.00000 1.00000
1 hdd 1.09000 osd.1 up 1.00000 1.00000
2 hdd 1.09000 osd.2 up 1.00000 1.00000
-7 0 host ld4464
-5 0 host ld4465
9 hdd 0 osd.9 up 1.00000 1.00000
The OSDs differ in size, so I defined additional buckets in the CRUSH map.
In addition I created 2 rules: one covering all HDDs belonging to the storage box, and one covering all other HDDs.
For your reference I have attached the active CRUSH map.
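For reference, this is roughly how the rules can be sanity-checked offline against the map (the rule id and filenames below are just examples; the numeric id of "hdd_rule" can be looked up with `ceph osd crush rule dump`):

```shell
# Fetch and decompile the active CRUSH map (example filenames)
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Dry-run a rule: does it find enough OSDs for the replica count?
# --rule takes the numeric rule id, --num-rep the pool's size
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings
```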
Then I created a pool named "fio" and set its CRUSH rule to "hdd_rule".
root@ld4257:/home/fio# ceph osd lspools
14 hddloc,15 fio,16 benchmark,
root@ld4257:/home/fio# ceph osd pool set fio crush_rule hdd_rule
set pool 15 crush_rule to hdd_rule
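For completeness, these are standard commands to double-check the pool after changing the rule (nothing here is specific to my setup):

```shell
# Confirm the rule is actually attached to the pool
ceph osd pool get fio crush_rule

# All PGs of the pool should reach active+clean; PGs stuck in
# creating/undersized/unknown would make client I/O hang
ceph pg ls-by-pool fio
ceph -s
```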
After this I initialized the pool fio for RBD use:
root@ld4257:/home/fio# rbd pool init fio
As the last step I wanted to create a block device image. However, this command hangs, and I have to kill it:
root@ld4257:/home/fio# rbd create --size 200G fio/test
^C
root@ld4257:/home/fio#
What's causing this issue?
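If it helps the diagnosis: the placement of a hypothetical object can be computed without performing any I/O, which I assume would show whether "hdd_rule" maps to valid OSDs at all (the object name below is arbitrary):

```shell
# Compute, without writing anything, which PG and OSDs
# an object in pool "fio" would be mapped to
ceph osd map fio some_test_object
```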
I have successfully created another block device image in a pool that resides on OSDs belonging to hdd_strgbox.
root@ld4257:/home/fio# rbd showmapped
id pool image snap device
0 benchmark block_device - /dev/rbd0
THX