Proxmox VE Ceph Benchmark 2018/02

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Feb 27, 2018.

  1. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,807
    Likes Received:
    158
    Hi,
the read benchmark reads back the data you wrote to the pool beforehand (from this node) - once all available data has been read, the benchmark stops.
Because reading is faster than writing, the job finishes in 32 seconds.
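For reference, a minimal sketch of this write-then-read cycle with rados bench (the pool name testpool and the 60-second duration are just placeholders):

```shell
# Write benchmark; --no-cleanup keeps the objects so they can be read back.
rados bench -p testpool 60 write --no-cleanup

# Sequential read benchmark: it reads the objects written above and stops
# as soon as all of them have been read, even if that is before 60 seconds.
rados bench -p testpool 60 seq

# Remove the leftover benchmark objects afterwards.
rados cleanup -p testpool
```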

    Udo
     
  2. victorhooi

    victorhooi Member

    Joined:
    Apr 3, 2018
    Messages:
    79
    Likes Received:
    2
    Got it.

Is there any way to figure out what the bottleneck is in the above (e.g. network, storage drives, RAM, etc.)? Or whether we've hit some hard limitation in Ceph at this scale?
     
  3. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,972
    Likes Received:
    168
You have reached your network limit; compare the results with our benchmark paper. To really get the IO/s out of your NVMe drives, you should consider upgrading to 40GbE or even 100GbE (with 3 nodes, no switch is needed).

    Possibly due to the read limitation of your LVM storage, but this is just a shot in the dark.
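As a rough sanity check of why the network becomes the ceiling, the link speeds below are the ones mentioned above; the conversion is just bits to bytes and ignores replication and protocol overhead, so real client throughput will be lower:

```python
# Rough upper bound on client throughput imposed by the network link alone.
# Link speed in Gbit/s divided by 8 gives GB/s of raw payload; Ceph traffic
# also carries replication and protocol overhead on top of this.

def link_ceiling_gbytes(link_gbits: float) -> float:
    """Theoretical payload ceiling of a network link, in GB/s."""
    return link_gbits / 8

for link in (10, 40, 100):
    print(f"{link} GbE -> at most {link_ceiling_gbytes(link):.2f} GB/s")
# A single modern NVMe drive can already exceed the ~1.25 GB/s that
# 10 GbE allows, which is why faster links are suggested above.
```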
     
  4. Alexander Marek

    Alexander Marek New Member
    Proxmox Subscriber

    Joined:
    Apr 6, 2018
    Messages:
    4
    Likes Received:
    0
Did anybody compare the SM883 with the SM863?
It seems the SM863 is no longer available on the market!

I guess performance is approximately the same, since it is just a newer model?

    Thank you in advance

    BR
     