Proxmox VE Ceph Benchmark 2018/02

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Feb 27, 2018.

  1. tuonoazzurro

    tuonoazzurro Member

    Joined:
    Oct 28, 2017
    Messages:
    54
    Likes Received:
    1
    I've been looking for this solution for about a year but never understood how to do it, so I've made a RAID 0 out of every single disk.
    Can you please explain how you did it?
    Thanks
     
  2. Cha0s

    Cha0s New Member

    Joined:
    Feb 9, 2018
    Messages:
    8
    Likes Received:
    0
    To be honest I don't remember how I did it. It's been quite a while since then, and I've since dismantled the lab, so I can't look it up for you.

    Looking into my browser history I see I've visited these two links
    https://unix.stackexchange.com/questions/665/installing-grub-2-on-a-usb-flash-drive
    https://unix.stackexchange.com/questions/28506/how-do-you-install-grub2-on-a-usb-stick

    These should get you started.
    Obviously it won't work just by blindly following those answers. It needs some customizing to work with Proxmox.

    Now that I think of it, I may have installed a vanilla Debian and then Proxmox on top of it.
    I really don't remember, sorry. I tried a lot of things during that period before I finally got it working.
     
  3. Cha0s

    Cha0s New Member

    Joined:
    Feb 9, 2018
    Messages:
    8
    Likes Received:
    0
    I managed to find some notes I kept back then.
    So I did install a vanilla Debian 9. Judging from my vague notes, I probably had the USB stick inserted during installation, used it to mount /boot, and then selected it in the final installation step as the target for the bootloader.

    Then I continued with installing Proxmox on Debian.

    These final steps may be outdated; better check the official installation guide.
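    For reference, a minimal sketch of what that bootloader step amounts to on an already-installed Debian system, assuming the USB stick shows up as /dev/sdX with its first partition holding /boot (the device name is hypothetical; adjust for your hardware):
    Code:
    # Mount the USB stick's boot partition (device name is an assumption)
    mount /dev/sdX1 /boot
    # Install GRUB to the stick's MBR; its files land on the mounted /boot
    grub-install /dev/sdX
    # Regenerate the GRUB menu
    update-grub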
     
  4. tuonoazzurro

    tuonoazzurro Member

    Joined:
    Oct 28, 2017
    Messages:
    54
    Likes Received:
    1
    Thank you very much! I'll try this method.
     
  5. fips

    fips Member

    Joined:
    May 5, 2014
    Messages:
    123
    Likes Received:
    5
    Hi,
    on my P420i I had the same issue, so I installed the OS on the ODD (optical drive) SATA port.
     
  6. tuonoazzurro

    tuonoazzurro Member

    Joined:
    Oct 28, 2017
    Messages:
    54
    Likes Received:
    1
    No redundancy on the install disk.

    What I'm looking for is:
    2 small disks in a ZFS mirror for Proxmox
    Other disks for ZFS/Ceph
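    That layout is what the Proxmox VE installer's ZFS RAID1 option produces on the two small disks. A minimal sketch for verifying the mirror after installation, assuming the installer's default pool name rpool:
    Code:
    # Expect a mirror-0 vdev containing both small disks
    zpool status rpool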
     
  7. Runestone

    Runestone New Member

    Joined:
    Oct 12, 2018
    Messages:
    1
    Likes Received:
    0
    Greetings!

    We are looking at building a 4-node HA cluster with Ceph storage on all 4 nodes and had some questions about some items in the FAQ. My idea was to install the OS on prosumer SSDs, put OSDs on enterprise SSDs, and add extra storage OSDs on spinners for low-use servers and backups. I may not be understanding the context of the FAQs below, so if someone could help me understand whether my idea above is workable, that would be great.

    This answer leads me to believe spinners would be fine if big storage is needed with the caveat that it will be slow.

    This answer leads me to believe that it is not acceptable to use spinners.

    And this answer leads me to believe that nothing less than an enterprise SSD should be used, which rules out consumer and prosumer SSDs as well as spinners.


    Thanks for the help.
     
  8. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,752
    Likes Received:
    151
    @Runestone, all of this has to be seen in the context of VM/CT hosting, where high IOPS are usually needed to run the infrastructure.
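    As a rough way to check whether a pool delivers the small-block IOPS that VM/CT hosting needs, a hedged benchmark sketch (the pool name and runtime below are placeholders, not from this thread):
    Code:
    # 4 KiB writes for 60 s with 16 concurrent ops; "testpool" is a placeholder
    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
    # remove the benchmark objects afterwards
    rados -p testpool cleanup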
     
  9. afrugone

    afrugone Member

    Joined:
    Nov 26, 2008
    Messages:
    99
    Likes Received:
    0
    Hi, I've configured a 3-server Ceph cluster using InfiniBand/IPoIB; iperf shows 20 Gbit/s, but the rados benchmark performs as if it were on a 1 Gbit/s link. How can I force Ceph traffic to use the InfiniBand network?
     
  10. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,738
    Likes Received:
    150
    Hi,
    put the public network (and the mon IPs) inside the InfiniBand network (if you have two networks, separate out the cluster network, i.e. the traffic between OSDs):
    Code:
    # client and mon traffic runs over the public network
    public_network = 192.168.2.0/24
    # OSD replication and heartbeat traffic runs over the cluster network
    cluster_network = 192.168.3.0/24

    [mon.0]
    host = pve01
    # the mon address must lie inside the public network
    mon_addr = 192.168.2.11:6789
    
    Udo
     
  11. afrugone

    afrugone Member

    Joined:
    Nov 26, 2008
    Messages:
    99
    Likes Received:
    0
    Many thanks for your answer. I configured Ceph from the GUI, and the ceph.conf is as shown below.
    Code:
    [global]
    auth client required = cephx
    auth cluster required = cephx
    auth service required = cephx
    cluster network = 172.27.111.0/24
    fsid = 6a128c72-3400-430e-9240-9b75b0936015
    keyring = /etc/pve/priv/$cluster.$name.keyring
    mon allow pool delete = true
    osd journal size = 5120
    osd pool default min size = 2
    osd pool default size = 3
    public network = 172.27.111.0/24

    [osd]
    keyring = /var/lib/ceph/osd/ceph-$id/keyring

    [mon.STO1001]
    host = STO1001
    mon addr = 172.27.111.141:6789

    [mon.STO1002]
    host = STO1002
    mon addr = 172.27.111.142:6789

    [mon.STO1003]
    host = STO1003
    mon addr = 172.27.111.143:6789

    The InfiniBand is on a separate network, 10.10.111.0/24, and the public network is 172.27.111.0/24, so should I put the following?

    Code:
    cluster network = 10.10.111.0/24
    public network = 172.27.111.0/24
    host = STO1001
    mon addr = 172.27.111.141:6789
    host = STO1002
    mon addr = 172.27.111.142:6789
    host = STO1003
    mon addr = 172.27.111.143:6789

    With this modification the benchmark results are as follows:

    Code:
    rados bench -p SSDPool 60 write --no-cleanup
    Total time run:          60.470899
    Total writes made:       2858
    Write size:              4194304
    Object size:             4194304
    Bandwidth (MB/sec):      189.05
    Stddev Bandwidth:        24.8311
    Max bandwidth (MB/sec):  244
    Min bandwidth (MB/sec):  144
    Average IOPS:            47
    Stddev IOPS:             6
    Max IOPS:                61
    Min IOPS:                36
    Average Latency(s):      0.338518
    Stddev Latency(s):       0.418556
    Max latency(s):          2.9173
    Min latency(s):          0.0226615
     
    #91 afrugone, Nov 23, 2018
    Last edited: Nov 23, 2018
  12. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,738
    Likes Received:
    150
    Hi,
    you don't have two Ceph networks!
    Drop the cluster network and use 10.10.111.0/24 as the public network. The mons must also be part of this network!

    Udo
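    Put concretely, a sketch of the change Udo describes, assuming the nodes' InfiniBand addresses are 10.10.111.141 to 10.10.111.143 (the last octets are carried over from the existing config and are an assumption; changing mon addresses on a running cluster needs care, see the Ceph documentation):
    Code:
    [global]
    # single Ceph network on InfiniBand; no cluster network line at all
    public network = 10.10.111.0/24

    [mon.STO1001]
    host = STO1001
    mon addr = 10.10.111.141:6789

    [mon.STO1002]
    host = STO1002
    mon addr = 10.10.111.142:6789

    [mon.STO1003]
    host = STO1003
    mon addr = 10.10.111.143:6789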
     
  13. afrugone

    afrugone Member

    Joined:
    Nov 26, 2008
    Messages:
    99
    Likes Received:
    0
    Sorry, but I'm a little confused by the network configuration. My network is as shown below: bond1 is Gigabit Ethernet and bond0 is InfiniBand with 40 Gbit/s interfaces, and I'm trying to get the storage traffic to go through the InfiniBand (bond0) interfaces.

    Code:
    auto lo
    iface lo inet loopback

    iface eno3 inet manual
    iface enp64s0f1 inet manual
    iface eno1 inet manual
    iface enp136s0f1 inet manual

    auto ib0
    iface ib0 inet manual

    auto ib1
    iface ib1 inet manual

    auto bond1
    iface bond1 inet manual
        slaves eno1 eno3
        bond_miimon 100
        bond_mode active-backup

    auto bond0
    iface bond0 inet static
        address 10.10.111.111
        netmask 255.255.255.0
        slaves ib0 ib1
        bond_miimon 100
        bond_mode active-backup
        pre-up modprobe ib_ipoib
        pre-up echo connected > /sys/class/net/ib0/mode
        pre-up echo connected > /sys/class/net/ib1/mode
        pre-up modprobe bond0
        mtu 65520

    auto vmbr0
    iface vmbr0 inet static
        address 172.27.111.141
        netmask 255.255.252.0
        gateway 172.27.110.252
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0
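    For reference, a quick way to check which network Ceph is actually using (standard Ceph and Linux tooling; the subnet is the InfiniBand one from this thread):
    Code:
    # mon addresses should now be inside the InfiniBand subnet 10.10.111.0/24
    ceph mon dump
    # live Ceph TCP connections should show 10.10.111.x peers
    ss -tn | grep '10\.10\.111\.'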
     
    #93 afrugone, Nov 28, 2018
    Last edited: Nov 28, 2018
  14. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,738
    Likes Received:
    150
    Hi,
    you should open a new thread, because this has nothing to do with Ceph benchmarking...

    Udo
     