Search results

  1. joining cluster failed

    Hi, I think you should check your switch. The last time this happened to us, the cause was our Cisco switch, which lacked the needed support.
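
    On PVE 5.x a failed cluster join is often the switch filtering corosync multicast traffic. As a sketch, the multicast test from the Proxmox docs, run on all nodes at the same time (hostnames node1..node3 are placeholders):

      # sustained packet loss here means multicast is being blocked
      omping -c 10000 -i 0.001 -F -q node1 node2 node3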
  2. removal plan

    Hi, we plan to move a VM from Proxmox clusterA to Proxmox clusterB. What is a good plan for removing it?
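
    A common route between clusters is backup-and-restore; a minimal sketch, assuming VMID 100 and storages named local and local-lvm (all placeholders):

      # on a clusterA node: back up VM 100
      vzdump 100 --mode snapshot --compress lzo --storage local
      # copy the archive to a clusterB node
      scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo root@clusterB-node:/var/lib/vz/dump/
      # on clusterB: restore it under a free VMID
      qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo 100 --storage local-lvm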
  3. change server question.

    Sorry, we forgot to write that we plan to move the CPU and memory over to the new server together as well.
  4. change server question.

    Hi, we want to replace the old server with a new server. The plan is: the disks do not change, only the server changes. 1) shut down all nodes; 2) move the disks from the old server to the new server. Is this plan OK?
  5. MTU config not active

    We configured the MTU but it is not active. Check the image: we connect a 10G Cisco DAC cable directly to a 10G switch, with no bond in use. Is this supported?
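
    On Proxmox VE a jumbo MTU normally has to be set on the physical port and on the bridge, and the switch port must allow jumbo frames as well. A sketch of /etc/network/interfaces, with interface names and the address as placeholders:

      auto eno1
      iface eno1 inet manual
          mtu 9000

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
          mtu 9000

    After applying, ping -M do -s 8972 <peer> confirms that 9000-byte frames really pass.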
  6. about fast Ceph config

    We have been reading about making Ceph I/O faster. Please help me check whether this config is OK: disable cephx; bluestore compression = lz4.
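
    As a sketch, those two ideas would look like this in /etc/pve/ceph.conf, using the option names from the Ceph docs. Note that disabling cephx removes all cluster authentication, so it is only defensible on a fully trusted network, and compression trades CPU for I/O:

      [global]
          auth_cluster_required = none
          auth_service_required = none
          auth_client_required = none

      [osd]
          bluestore_compression_algorithm = lz4
          bluestore_compression_mode = aggressive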
  7. how to upgrade to the new version?

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    All packages are up to date.
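
    apt reporting everything up to date while a newer release exists usually means no usable Proxmox package repository is configured. A sketch of the no-subscription entry for PVE 5.x on Debian stretch, as published by Proxmox:

      # /etc/apt/sources.list.d/pve-no-subscription.list
      deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

      apt update && apt dist-upgrade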
  8. how to upgrade to the new version?

    We cannot upgrade to 6.0; running apt dist-upgrade only takes us up to 5.4-11.
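
    5.x to 6.0 is a major upgrade (Debian stretch to buster), so dist-upgrade against the 5.x repositories can never go past the last 5.4 build. A rough sketch of the documented path (take backups first; the sed targets are examples of where stretch may appear):

      # flag known blockers before touching anything
      pve5to6
      # switch the Debian and Proxmox repositories from stretch to buster
      sed -i 's/stretch/buster/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
      apt update && apt dist-upgrade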
  9. how to upgrade to the new version?

    Hi, we have a subscription. How do we upgrade from 5.3-9 to the new version? And if the upgrade succeeds, do we need to restart the server?
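
    Within the 5.x line this is an ordinary package upgrade, and with an active subscription the enterprise repository should already be configured. A minimal sketch:

      apt update
      apt dist-upgrade
      # a reboot is only needed when a new pve-kernel package was installed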
  10. Ceph I/O very, very low

    Oh, I have added an SSD but the I/O has not improved; it is still slow.
  11. Ceph I/O very, very low

    Please help me check whether this is right? Thanks, Alwin.
  12. Ceph I/O very, very low

    Should we first remove 1 or 2 HDDs? Or remove no HDDs, and just add the SSD and check the I/O?
  13. Ceph I/O very, very low

    And we plan to redo the config beforehand, because these 4 nodes have running VMs. We would just remove 1 HDD OSD from every node, so every node goes from 5 OSDs to 4 OSDs, and we would use 1 SSD for the log. Do you think that is OK and safe? Or should we remove 2 disks from every node? If we remove them, we would go 1 by...
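
    Shrinking a live cluster is normally done one OSD at a time, waiting for Ceph to rebalance before touching the next disk. A minimal sketch with the stock Ceph commands, assuming the OSD id is 12 (a placeholder):

      # take the OSD out and wait until ceph -s reports HEALTH_OK again
      ceph osd out 12
      # once rebalancing has finished, stop the daemon and remove the OSD
      systemctl stop ceph-osd@12
      ceph osd purge 12 --yes-i-really-mean-it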
  14. Ceph I/O very, very low

    So 150G = 161061273600, right? We are still worried that editing ceph.conf with vi will cause trouble.
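
    The arithmetic holds if 150G means 150 GiB: 150 × 1024 × 1024 × 1024 = 161061273600 bytes. A one-line shell check:

      echo $((150 * 1024 * 1024 * 1024))   # prints 161061273600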
  15. Ceph I/O very, very low

    Thanks. In /etc/pve/ceph.conf we have bluestore_block_db_size = 16106127360 and bluestore_block_wal_size = 16106127360. If we add bluestore_block_db_size = 150G and bluestore_block_wal_size = 150G to [global], is that OK? And how should 150G be written?
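
    Older Ceph releases expect these options as a plain byte count, so writing the value out in bytes is the safe form; they also only take effect for OSDs created after the change. A sketch of the [global] section:

      [global]
          # 150 GiB written as bytes (150 * 1024^3)
          bluestore_block_db_size = 161061273600
          bluestore_block_wal_size = 161061273600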
  16. Ceph I/O very, very low

    OK. We just want to know: if adding it is fine, do we need to restart the server or Ceph? And should the default 1G be made bigger, for example 10G? We do not understand what this 1G is doing. Is this 1G holding the log? If we want to make it bigger, is there an easy config for that? We looked at the other URL you gave me but did not understand it. Thanks for replying.
  17. Ceph I/O very, very low

    bluestore_block_db_size = 16106127360 and bluestore_block_wal_size = 16106127360: I do not understand how to configure this. Can we just edit ceph.conf directly with vi, write it in, and save, and it will run OK? Sorry.
  18. Ceph I/O very, very low

    You mean this 1G gets written full, and then it starts writing to the HDD? So the other 478G goes unused?
  19. Ceph I/O very, very low

    We have 2 OSDs that are 6TB HDDs, and we added an SSD. You can see in the image that every HDD only uses 1G, so I do not understand why it is not made bigger. Is it OK to use this default? We really have no history with Ceph using HDD+SSD.