Ceph I/O is very, very low

If we add it, do we need to restart the server or Ceph?
And does the default 1G need to be made bigger, for example 10G?
No, you don't need to restart any service. You can set the size to whatever partition size you would like.

We don't understand what this 1G is used for. Is this 1G used for the log?
Sorry, I tried to explain this before, but you need to read the docs.
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/
https://ceph.com/community/new-luminous-bluestore/

It seems there is a Chinese translation of the Ceph docs, if you prefer to read them in Chinese.
https://github.com/drunkard/ceph-Chinese-doc

If we want to add more later, is there an easy config option for that?
You would need to change the partition size for that.
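If the underlying partition is enlarged, newer releases of `ceph-bluestore-tool` can make BlueFS use the extra space. A hedged sketch (the OSD id `0` and its path are placeholders, and this assumes a release where `bluefs-bdev-expand` is available):

```shell
# Stop the OSD whose DB partition was grown (OSD id 0 is a placeholder).
systemctl stop ceph-osd@0

# Let BlueFS expand onto the newly enlarged block device.
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0

# Bring the OSD back up.
systemctl start ceph-osd@0
```

Do this one OSD at a time and check `ceph -s` for health before moving on.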
Thanks.
In /etc/pve/ceph.conf we currently have:
bluestore_block_db_size = 16106127360
bluestore_block_wal_size = 16106127360
If we add to the global section:
bluestore_block_db_size = 150G
bluestore_block_wal_size = 150G
is that OK? How should 150G be written?
Best to specify it in bytes.
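For reference, the existing value 16106127360 is exactly 15 GiB (15 × 1024³), so 150 GiB would be 150 × 1024³ = 161061273600. A sketch of what the `[global]` section could look like, assuming you want both DB and WAL at that size:

```ini
[global]
# 150 GiB = 150 * 1024^3 = 161061273600 bytes
bluestore_block_db_size = 161061273600
bluestore_block_wal_size = 161061273600
```

Plain byte values are unambiguous across Ceph versions, which is why they are preferred over suffixes like "150G".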
And we plan to redo the configuration before applying it, because these 4 nodes have running VMs.
We would just remove 1 OSD HDD from every node, so every node goes from 5 OSDs to 4 OSDs, and we would use 1 SSD for the log. Do you think that is OK and safe?
Or should we remove 2 disks from every node? If so, should we remove them one by one, or remove both disks at once?
The configuration before is:
4 nodes, each with 5 HDDs
Afterwards:
4 nodes, each with 3 HDDs + 1 SSD
Please help me check. Thanks.
Create one OSD and see.
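Whichever count you end up with, the usual approach on a cluster with running VMs is to drain one OSD at a time and wait for the cluster to return to health before touching the next disk. A hedged sketch of that per-OSD cycle (the OSD id `4` is a placeholder, and `ceph osd purge` assumes Luminous or newer):

```shell
# Hypothetical OSD id; repeat this cycle for one OSD per node at a time.
OSD_ID=4

# Mark the OSD "out" so its data migrates to the remaining OSDs.
ceph osd out "$OSD_ID"

# Watch the cluster and wait for HEALTH_OK / all PGs active+clean
# before proceeding.
ceph -s

# Once rebalancing has finished, stop the daemon and remove the OSD.
systemctl stop ceph-osd@"$OSD_ID"
ceph osd purge "$OSD_ID" --yes-i-really-mean-it
```

Removing one disk at a time keeps only one OSD's worth of data rebalancing at once, which limits the I/O impact on the running VMs.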