ceph io very very low

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
4,617
430
88

haiwan

Member
Apr 23, 2019
220
1
18
33
This creates a partition for the OSD on sd<Y>; you need to run it once per OSD. You might also want to increase the size of the DB/WAL in ceph.conf if needed.
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/#sizing

Code:
ceph daemon osd.0 perf dump | grep bluestore
Here you can see some stats for an OSD. Also run a rados bench again and compare the results.
We don't understand. You see, every one of our nodes has 2 HDDs and 1 SSD, so how can one SSD serve both HDDs?
 

haiwan

Member
This creates a partition for the OSD on sd<Y>; you need to run it once per OSD. You might also want to increase the size of the DB/WAL in ceph.conf if needed.
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/#sizing

Code:
ceph daemon osd.0 perf dump | grep bluestore
Here you can see some stats for an OSD. Also run a rados bench again and compare the results.
Hi Alwin, please check whether we understand this correctly:
on the SSD we create partitions a, b, c, ...
partition a backs the OSD on HDD 1,
partition b backs the OSD on HDD 2,
partition c backs the OSD on HDD 3.
Is that right?
 

Alwin

Proxmox Staff Member
Hi Alwin, please check whether we understand this correctly:
on the SSD we create partitions a, b, c, ...
partition a backs the OSD on HDD 1,
partition b backs the OSD on HDD 2,
partition c backs the OSD on HDD 3.
Is that right?
If I understand you right, then yes. You can verify this by looking into '/var/lib/ceph/osd/ceph-<ID>/' and see which UUID is connected.
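The symlinks in that directory show which device backs each part of the OSD. A minimal sketch of the idea, using a mock directory since the real paths only exist on a Ceph node (device names and OSD ID 0 are examples):

```shell
# On a real node you would inspect /var/lib/ceph/osd/ceph-<ID>/ directly.
# Here we mock the layout to show what the symlinks mean:
mkdir -p /tmp/osd-demo/ceph-0
ln -sf /dev/sdd1 /tmp/osd-demo/ceph-0/block      # data partition on the HDD
ln -sf /dev/sdf1 /tmp/osd-demo/ceph-0/block.db   # DB partition on the SSD
# readlink reveals which partition each symlink points at:
readlink /tmp/osd-demo/ceph-0/block.db
```

If `block.db` points at an SSD partition and `block` at an HDD partition, the pairing is set up as intended.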
 

haiwan

Member
If I understand you right, then yes. You can verify this by looking into '/var/lib/ceph/osd/ceph-<ID>/' and see which UUID is connected.
Hi Alwin, is there a demo or example we could study? We have no prior experience with this.
 

haiwan

Member
This creates a partition for the OSD on sd<Y>; you need to run it once per OSD. You might also want to increase the size of the DB/WAL in ceph.conf if needed.
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/#sizing

Code:
ceph daemon osd.0 perf dump | grep bluestore
Here you can see some stats for an OSD. Also run a rados bench again and compare the results.
Hi Alwin,
could you give us the command that creates the partition for the OSD on sd<Y>? We don't understand how to do it.
 

haiwan

Member
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
We just run this, where X is the HDD and Y is the SSD?
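With two HDDs sharing one SSD, that command is run once per HDD, pointing `-journal_dev` at the same SSD each time. A sketch as a dry run (the device letters sdd, sde, sdf are assumptions; the `echo` only prints the commands instead of executing them):

```shell
# Dry run: print the pveceph invocation for each HDD, both sharing SSD sdf.
# Remove the `echo` on a real node to actually create the OSDs.
for hdd in /dev/sdd /dev/sde; do
  echo pveceph createosd "$hdd" -journal_dev /dev/sdf
done
```

pveceph then carves a fresh DB/WAL partition out of the SSD for each OSD it creates.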
 

Attachments

  • 微信截图_20190713221633.png (19.4 KB)

Alwin

Proxmox Staff Member
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
Yes, but from what I see the only free disks are sdd and sdf. If you use those for your OSDs, it should work.
 

haiwan

Member
Apr 23, 2019
220
1
18
33
Yes, but from what I see the only free disks are sdd and sdf. If you use those for your OSDs, it should work.
Is this safe? Please help me check.
 

Attachments

  • 微信截图_20190716001953.png (15 KB)
  • 微信截图_20190716002056.png (18.8 KB)

Alwin

Proxmox Staff Member
bluestore_block_db_size = 16106127360
bluestore_block_wal_size = 16106127360
So we just add this to the global config?
If you changed the sizes to your needs, yes.
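For reference, those values are plain byte counts; 16106127360 bytes works out to exactly 15 GiB, which can be checked with shell arithmetic:

```shell
# 15 GiB expressed in bytes matches the quoted bluestore_block_db_size:
echo $((15 * 1024 * 1024 * 1024))
# prints 16106127360
```

So to pick a different size, multiply the desired number of GiB by 1024^3 and use that byte value in ceph.conf.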
 

haiwan

Member
If you changed the sizes to your needs, yes.
We have 2 OSDs on 6 TB HDDs.
We added the SSD, but as you can see in the image, each HDD only uses 1 GB of it.
We don't understand why it doesn't use more.
Is the default OK? We really have no prior experience with Ceph on HDD+SSD.
 

Alwin

Proxmox Staff Member
We added the SSD, but as you can see in the image, each HDD only uses 1 GB of it.
We don't understand why it doesn't use more.
The DB expands as long as there is space and the WAL is usually 512 MB. After that it spills over to the slower HDD. When everything is on the same disk, the OSD takes care of it.

In general, see our Ceph benchmark paper and my post for links to the Ceph docs.
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
https://forum.proxmox.com/threads/ceph-raw-usage-grows-by-itself.38395/#post-189842
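Whether the DB has already spilled over to the HDD can be read from the `bluefs` counters in that same perf dump. A sketch with a mocked dump excerpt so the filtering can be shown without a running cluster (counter names are from the Luminous-era `bluefs` section; verify them against your version):

```shell
# On the OSD node you would run:
#   ceph daemon osd.0 perf dump | grep -E '(db|slow)_used_bytes'
# Mocked excerpt standing in for the real JSON output:
printf '"db_used_bytes": 1073741824,\n"slow_used_bytes": 0,\n' \
  | grep -E '(db|slow)_used_bytes'
```

A non-zero `slow_used_bytes` means the DB no longer fits on the SSD partition and has spilled onto the slow device.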
 

haiwan

Member
Apr 23, 2019
220
1
18
33
The DB expands as long as there is space and the WAL is usually 512 MB. After that it spills over to the slower HDD. When everything is on the same disk, the OSD takes care of it.

In general, see our Ceph benchmark paper and my post for links to the Ceph docs.
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
https://forum.proxmox.com/threads/ceph-raw-usage-grows-by-itself.38395/#post-189842
Do you mean that once this 1 GB is written full, writes start going to the HDD?
So the other 478 GB are unused?
 

haiwan

Member
bluestore_block_db_size = 16106127360
bluestore_block_wal_size = 16106127360
We don't understand how to configure this.
Can we just edit ceph.conf directly with vi, write the values in, and save it while everything is running? Sorry.
 

Alwin

Proxmox Staff Member
/etc/pve/ceph.conf
bluestore_block_db_size = 16106127360
bluestore_block_wal_size = 16106127360
So we just add this to the global config?
You answered it yourself already.
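A sketch of how that could look in /etc/pve/ceph.conf, using the 15 GiB byte values quoted above (the sizes are examples, not a recommendation; adjust them to your hardware):

```ini
[global]
    # ... keep your existing settings ...
    # BlueStore DB and WAL sizes in bytes (15 GiB each here):
    bluestore_block_db_size = 16106127360
    bluestore_block_wal_size = 16106127360
```

Note that these sizes are read when an OSD is created; they do not retroactively resize the partitions of existing OSDs.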
 

haiwan

Member
You answered it yourself already.
OK. We just want to know:
after adding it, do we need to restart the server or Ceph?
And should the default 1 GB be made bigger, for example to 10 GB?
We don't understand what this 1 GB is doing. Is it a log? If we want to enlarge it, is there an easy way to configure that?
We read the other URLs you gave us but didn't understand them.
Thanks for replying.
 
