Search results

  1. Add node error

    We have resolved that, but after we added a new node and new OSDs, Ceph reported: 4 slow requests are blocked > 32 sec. Implicated osds 3,5. 1 ops are blocked > 131.072 sec. 3 ops are blocked > 32.768 sec. osd.3 has blocked requests > 32.768 sec. osd.5 has blocked requests > 131.072 sec. Reduced data...
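    To see what those blocked requests are actually doing, the health detail and the per-OSD admin socket can be queried; a quick sketch (the daemon command must run on the node that hosts osd.3):

        ceph health detail
        ceph daemon osd.3 dump_ops_in_flight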
  2. Add node error

    root@pve2:~# pveversion -v
    proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
    pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
    pve-kernel-4.15: 5.3-3
    pve-kernel-4.15.18-12-pve: 4.15.18-35
    ceph: 12.2.12-pve1
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1...
  3. Add node error

    Sorry, we have reinstalled the non-working node, but the old cluster still shows this pve5.
  4. Add node error

    Hi, please check the image: we first joined a cluster successfully, but later we used the public IP to join the old cluster, and you can see the error there. We want to remove the node, but pvecm nodes does not show this pve5 node, and the Join Information button cannot be clicked.
  5. Proxmox Ceph + XenServer

    Hi, we have a problem now. XenServer on a FreeNAS RAID-Z has slow writes, but the PVE Ceph cluster we built is very fast. Is there a way to share the PVE Ceph storage with XenServer over 10G, using iSCSI or NFS?
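    Either protocol needs a gateway, since XenServer cannot speak RBD natively. A minimal sketch of the NFS route, assuming the pool name pveceph-vm seen in the other threads and a hypothetical image name xen-share:

        # On one PVE node: create, map, and format an RBD image (names are placeholders)
        rbd create pveceph-vm/xen-share --size 1T
        rbd map pveceph-vm/xen-share            # appears as e.g. /dev/rbd0
        mkfs.xfs /dev/rbd0
        mkdir -p /export/xen && mount /dev/rbd0 /export/xen

        # Re-export it to the XenServer network over NFS
        apt install nfs-kernel-server
        echo '/export/xen 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra

    An iSCSI export would work the same way, handing the mapped /dev/rbd0 to a target such as LIO instead of NFS.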
  6. About auto IP configuration

    Hi, we want to know whether PVE supports automatic IP configuration (DHCP). If it does, how do we configure it? Thanks.
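    For reference, PVE hosts are normally given static addresses, but Debian's /etc/network/interfaces (which PVE uses) can request one via DHCP; a minimal sketch, assuming a bridge over a NIC named eno1:

        auto vmbr0
        iface vmbr0 inet dhcp
            bridge_ports eno1
            bridge_stp off
            bridge_fd 0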
  7. About network bonding

    Thanks, oguz. We have a new question: if we bond a 2x10G network in mode 802.3ad, does the switch also need a matching bond configured, or no configuration? And if the switch supports it by default, how do we test that the bond is working?
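    802.3ad (LACP) does require a matching port-channel on the switch side. A sketch of the PVE side, assuming NICs named eno1/eno2, plus a quick way to verify negotiation:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-miimon 100
            bond-xmit-hash-policy layer3+4

        # Verify: each slave should report "MII Status: up" and the same Aggregator ID
        cat /proc/net/bonding/bond0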
  8. About network bonding

    Hi, I remember there is a video showing how to bond a 2x10G network, but I forget which one it is. We have looked through the 5.4, 5.3, 5.2, and 5.0 versions one by one and still have not found it. If you know it, please give me the video URL. Thanks.
  9. Ceph I/O very, very low

    And we have a 1TB Intel SSD ready.
  10. Ceph I/O very, very low

    Thanks, Alwin. Let me add the SSD, run the test, and let you know.
  11. Ceph I/O very, very low

    And do you think our earlier test with writeback enabled was OK? Given the SATA limits: if I add one SSD per node in the testing servers, will it improve the read and write speed?
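    (The writeback in question is the QEMU disk cache mode; a hedged example of enabling it, with a hypothetical VMID and disk name:)

        # VMID 100 and vm-100-disk-0 are placeholders
        qm set 100 --scsi0 pveceph-vm:vm-100-disk-0,cache=writeback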
  12. Ceph I/O very, very low

    Okay, thanks, Alwin. So do we understand this right: with SATA and no SSD, write and read performance will not be good? If we use one of these plans: plan A, 3 nodes with 2 OSDs each; plan B, 5 nodes with 5 OSDs each; plan C, 5 nodes with 3 OSDs each, every OSD a 6TB SATA disk, you...
  13. Ceph I/O very, very low

    root@pve20:~# rados bench -p pveceph-vm 10 write --no-cleanup
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
    Object prefix: benchmark_data_pve20_1222678
      sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)...
  14. Ceph I/O very, very low

    root@pve20:~# rados bench -p pveceph-vm 60 write -b 4M -t 16 --no-cleanup
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
    Object prefix: benchmark_data_pve20_1221018
      sec Cur ops   started  finished  avg MB/s  cur MB/s...
  15. Ceph I/O very, very low

    root@pve20:~# rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup
    error opening pool rbd: (2) No such file or directory
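    The error just means no pool named rbd exists on this cluster; list the pools and point the benchmark at an existing one, e.g.:

        ceph osd lspools
        rados bench -p pveceph-vm 60 write -b 4M -t 16 --no-cleanup
        rados -p pveceph-vm cleanup    # remove the benchmark objects afterwards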
  16. Ceph I/O very, very low

    Use this?

        fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest

    or

        rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup
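    The two commands measure different layers: fio against /dev/sdx exercises one raw disk (and with any write component it would destroy the data on it), while rados bench exercises the whole Ceph pool over the network. A safer single-disk baseline, with the device name as an assumption, is read-only:

        # Read-only 4k random test on one disk (/dev/sdx is a placeholder)
        fio --filename=/dev/sdx --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=16 --numjobs=4 --runtime=60 --group_reporting --name=4k-read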
  17. Ceph I/O very, very low

    And this is the test after enabling writeback. Do you think it looks OK?
  18. Ceph I/O very, very low

    Every node is connected to the 10G switch; no bond is used. These are enterprise SATA disks.
  19. Ceph I/O very, very low

    root@pve20:~# ceph osd crush tree --show-shadow
    ID CLASS WEIGHT   TYPE NAME
    -2   hdd 38.20520 root default~hdd
    -4   hdd 10.91577     host pve20~hdd
     0   hdd  5.45789         osd.0
     1   hdd  5.45789         osd.1
    -6   hdd 10.91577     host pve21~hdd
     2   hdd  5.45789...
  20. Ceph I/O very, very low

    Hi, thanks. Yes, we tried enabling writeback, and writes really are fast. But I don't know why the I/O is otherwise so slow; all our disks are new.