Search results

  1. ceph very slow performance

    Yes, in both. I was testing something to see if it would work; I will remove the subnet from eth4 and eth5, leave them empty, and test with one port to see what happens.
  2. ceph very slow performance

    Here is the correct test: root@ceph4:~# iperf -c 10.10.1.2 ------------------------------------------------------------ Client connecting to 10.10.1.2, TCP port 5001 TCP window size: 85.0 KByte (default) ------------------------------------------------------------ [ 3] local 10.10.1.4 port 45112...
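
    A minimal sketch of that throughput check between two nodes, assuming iperf is installed on both ends and using the 10.10.1.x addresses from the thread:

        # on the receiving node (10.10.1.2): start an iperf server
        iperf -s

        # on the sending node: 10-second TCP throughput test
        iperf -c 10.10.1.2 -t 10

        # several parallel streams, closer to Ceph's many-connection traffic pattern
        iperf -c 10.10.1.2 -t 10 -P 4
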
  3. ceph very slow performance

    rados bench -p test 60 write --no-cleanup How does "ceph osd perf" look during the write bench? Here it is:
        root@ceph3:~# ceph osd perf
        osd  commit_latency(ms)  apply_latency(ms)
          8                   0                  0
          6                   0                  0
          7                   0                  0
          5...
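
    A minimal sketch of the benchmark pair being discussed, assuming a pool named test already exists:

        # 60-second 4 MiB sequential-write benchmark; keep the objects for a later read test
        rados bench -p test 60 write --no-cleanup

        # in a second shell, sample per-OSD commit/apply latency once per second
        watch -n 1 ceph osd perf

        # remove the benchmark objects afterwards
        rados -p test cleanup
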
  4. ceph very slow performance

    Now I'm using 1 x Intel P3700 800GB per node for the DB/journal, and performance is really bad. root@ceph2:~# rados -p test bench 10 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects Object prefix...
  5. Mellanox bonding issue

    Hello, I have 3 Ceph servers that I am trying to bond via Mellanox: 05:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3] Subsystem: Mellanox Technologies MT27500 Family [ConnectX-3] After bonding the private IPs there is no connection between the nodes. auto ib0 iface ib0...
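
    A minimal sketch of what an IPoIB bond stanza in /etc/network/interfaces could look like, assuming ib0/ib1 are the ConnectX-3 ports and an illustrative 10.10.10.0/24 private subnet; note that IPoIB bonds generally support only active-backup:

        auto bond0
        iface bond0 inet static
            address 10.10.10.1
            netmask 255.255.255.0
            bond-slaves ib0 ib1
            bond-mode active-backup   # IPoIB cannot use LACP/balance modes
            bond-miimon 100
            pre-up modprobe ib_ipoib  # ensure the IPoIB module is loaded
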
  6. P3700 vs 900P

    I will use a mix then, 2 x P3700 and 1 x 900P, and will add more as soon as I add more servers.
  7. P3700 vs 900P

    I got 2 PCIe Intel P3700s and one 900P. Yes, I will use them for Bluestore, since I'm using 6TB enterprise drives for the OSDs and need speed for my storage, with an FDR 56Gb/s switch. Is it wise to use only one drive for the block DB?
  8. P3700 vs 900P

    Hello, I'm confused between the Intel P3700 and the Intel 900P for the journal. I currently have 1 x 900P and 1 x P3700, but the third server is not decided yet. I understand that the 900P is a consumer drive, but it has quite a good lifespan, 10 Drive Writes Per Day (DWPD), and is even faster than the P3700, which has 43.8...
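
    For comparing endurance ratings, DWPD and a total-bytes-written rating convert into each other; a worked example with illustrative figures (not vendor specs), assuming an 800GB drive with a 5-year warranty:

        DWPD = TBW / (capacity_TB x 365 x warranty_years)

        e.g. 14.6 PBW on an 800GB drive over 5 years:
        14600 TB / (0.8 TB x 365 x 5) = 14600 / 1460 = 10 DWPD
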
  9. Proxmox VE Ceph Benchmark 2018/02

    Is it possible to do drive recovery on a separate private network, instead of killing the performance while in production? I'm testing Ceph with an Intel 900P as journal and will update later with the test results.
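
    Ceph does support that split: replication, heartbeat and recovery traffic can be moved onto a dedicated cluster network while clients keep using the public one. A minimal ceph.conf sketch with illustrative subnets:

        [global]
            # client and monitor traffic
            public network  = 192.168.1.0/24
            # OSD replication, heartbeat and recovery traffic
            cluster network = 10.10.1.0/24
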
  10. Flash Accelerator and SSD journal

    Hello, is it wise to use a Flash Accelerator as a journal? It gives impressive results with 4k block reads, around 200+. I'm not sure how Ceph works internally, but does it use 4k blocks?
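
    What matters for a journal device is small synchronous writes rather than reads. A minimal fio sketch of that workload, assuming fio is installed; the device name is illustrative, and the test writes to it destructively:

        fio --name=journal-test --filename=/dev/nvme0n1 \
            --ioengine=libaio --direct=1 --sync=1 \
            --rw=write --bs=4k --numjobs=1 --iodepth=1 \
            --runtime=60 --time_based
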
  11. ceph very slow performance

    How good can that be? Check the attachment. It's an SSD. Can a PM953 do any better?
  12. ceph very slow performance

    I'm using the following: Samsung PM853T and BX100. My concern is that my 5TB gives 180+ MB/s without Ceph!
  13. ceph very slow performance

    I have a mix of Samsung enterprise and Crucial drives, though it is only for testing. Not sure why the latency is so high: Max latency(s): 100.116
  14. ceph very slow performance

    Hello, I just built Ceph with 3 nodes and 3 x 5TB hard drives (256MB cache) + an SSD as journal, a dual-port 10GB NIC, and a Juniper 10GB switch. bond0 is 2 x 10GB Intel T copper cards in balance-tlb mode. I tested and Ceph is very slow, not sure why: root@ceph2:~# rados -p test bench 10 write --no-cleanup...
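
    One thing worth checking here: balance-tlb balances only outgoing traffic, while LACP (802.3ad) balances both directions on a managed switch such as the Juniper mentioned. A sketch of an /etc/network/interfaces stanza with illustrative interface names and address:

        auto bond0
        iface bond0 inet static
            address 10.10.1.2
            netmask 255.255.255.0
            bond-slaves eth0 eth1
            bond-mode 802.3ad              # requires a matching LACP group on the switch
            bond-miimon 100
            bond-xmit-hash-policy layer3+4 # spread flows across both links
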
  15. [SOLVED] Ceph OSD issue

    One more update: I just used dd if=/dev/zero of=/dev/sdd bs=1M count=1024 conv=fdatasync and it works now. Thanks.
  16. [SOLVED] Ceph OSD issue

    I tried both ways and it was never fixed. Remove the OSD:
        # ceph osd out osd.{osd-num}
        # ceph osd crush remove osd.{osd-num}
        # ceph osd down osd.{osd-num}
        # ceph auth del osd.{osd-num}
        # ceph osd rm osd.{osd-num}
        parted /dev/sdb
        sgdisk -Z /dev/sdb
        root@xx:~# dd if=/dev/sdb of=/fil.img ^C^X2233312+0...
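
    For what it's worth, on Luminous and newer those removal steps collapse into a single command (the OSD id is illustrative):

        # removes the OSD from the CRUSH map, deletes its auth key and drops it from the OSD map
        ceph osd purge 3 --yes-i-really-mean-it
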
  17. [SOLVED] Ceph OSD issue

    Hello, are you referring to using this for dd: dd if=/dev/null of=/dev/sdxx ?
  18. [SOLVED] Ceph OSD issue

    Hello, I used 3 nodes of Ceph with 3 x 5TB drives and 2 SSDs as journal; however, once I created the OSDs they are not visible in the GUI and show as down in PuTTY. I checked with Google but can't find any solution. root@xxx:~# ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1...
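
    A minimal sketch of the usual first checks when freshly created OSDs stay down (the OSD id is illustrative):

        # is the OSD daemon actually running on the node?
        systemctl status ceph-osd@0

        # follow the OSD log for startup errors
        journalctl -u ceph-osd@0 -f

        # how the cluster currently sees the OSDs
        ceph osd tree
        ceph -s
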
  19. mix storage with ceph

    Yes, using them as two different storage types; even so, the VPSes will be OK, right?
  20. mix storage with ceph

    Hello, is it possible to mix Proxmox local storage with Proxmox Ceph storage in the same cluster, or will there be issues? Thanks
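
    Proxmox VE does allow both side by side; each storage is just an entry in /etc/pve/storage.cfg. A minimal sketch with illustrative storage names, assuming a hyperconverged Ceph pool:

        dir: local
            path /var/lib/vz
            content iso,vztmpl,backup

        rbd: ceph-vm
            pool rbd
            content images,rootdir
            krbd 0
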