
Install Ceph Server on Proxmox VE (Video tutorial)

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jan 24, 2017.

  1. martin

    martin Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    573
    Likes Received:
    83
    We just created a new tutorial for installing Ceph Jewel on Proxmox VE.

    The Ceph Server integration has been available for three years now and is a widely used component for building a true open source, hyper-converged virtualization and storage setup that is highly scalable and without limits.

    Video Tutorial
    Install Ceph Server on Proxmox VE

    Documentation
    https://pve.proxmox.com/wiki/Ceph_Server

    Any comments and feedback are very welcome!
    __________________
    Best regards,

    Martin Maurer
    Proxmox VE project leader
     
  2. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,003
    Likes Received:
    61
    Nice,
    but the read benchmarks are largely measuring caching. This should be noted in the video.
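
    A minimal sketch of keeping the page cache out of a read test (the test file path is just a placeholder):

    # flush pending writes, drop the page cache, then read with direct I/O
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/path/to/testfile of=/dev/null bs=1M iflag=direct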

    Udo
     
  3. flexyz

    flexyz Member

    Joined:
    Sep 22, 2016
    Messages:
    42
    Likes Received:
    2
    I see the 3 hosts are interconnected with bonding in broadcast mode. How is that wired on the back?

    Thanks!
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,584
    Likes Received:
    112
  5. RobFantini

    RobFantini Active Member
    Proxmox VE Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,126
    Likes Received:
    6
    Thank you for the excellent video.

    Question: in the Stretch KVM, which program was used for the benchmarks?
     
  6. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,584
    Likes Received:
    112
    We used gnome-disk-utility (Disks):

    > apt install gnome-disk-utility
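
    If you prefer a command-line tool inside the VM, something like fio can give similar numbers (a rough sketch; file name and size are placeholders):

    > apt install fio
    > fio --name=readtest --filename=/tmp/fio.test --size=1G --rw=read --bs=1M --direct=1 --ioengine=libaio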
     
  7. blueduckdock

    blueduckdock New Member

    Joined:
    Mar 18, 2016
    Messages:
    11
    Likes Received:
    0
    That is an awesome video. Very easy to follow along, etc. I'll keep this in mind for a project I'm working on. Thanks
     
  8. Dave Wood

    Dave Wood New Member

    Joined:
    Jan 9, 2017
    Messages:
    16
    Likes Received:
    0
    Hi,

    I need your help. I'm getting very poor performance.
    I have a 3-node Proxmox cluster built on HP DL580 G7 servers. Each server has a dual-port 10 Gbps NIC.
    Each node has 4 x 600 GB 15K 2.5" SAS drives and 4 x 1 TB 7.2K SATA drives.

    Each node has the following partitions (I'm using logical volumes as OSDs):

    Node 1
    100GB for Proxmox (7.2 K SATA)
    2.63 TB (7.2K) OSD.3
    1.63 TB (15K) OSD.0

    Node 2
    100GB for Proxmox (7.2 K SATA)
    2.63 TB (7.2K) OSD.4
    1.63 TB (15K) OSD.1

    Node 3
    100GB for Proxmox (7.2 K SATA)
    2.63 TB (7.2K) OSD.5
    1.63 TB (15K) OSD.2

    I created two CRUSH rulesets and two pools (see the sketch below):

    Ruleset 0
    osd.0, osd.1, osd.2 (15K)
    Pool ceph-performance uses ruleset 0

    Ruleset 1
    osd.3, osd.4, osd.5 (7.2K)
    Pool ceph-capacity uses ruleset 1
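
    A rough sketch of how such rulesets and pools can be created on ceph 0.94 (the root buckets 'fast' and 'capacity', the rule names, the PG counts and the ruleset IDs are placeholders to adapt, e.g. after checking ceph osd crush rule dump):

    # one simple rule per disk tier; the CRUSH root buckets must already exist
    ceph osd crush rule create-simple fast-rule fast host
    ceph osd crush rule create-simple capacity-rule capacity host
    # create the pools and point each one at its ruleset
    ceph osd pool create ceph-performance 128 128
    ceph osd pool set ceph-performance crush_ruleset 1
    ceph osd pool create ceph-capacity 128 128
    ceph osd pool set ceph-capacity crush_ruleset 2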

    Proxmox and CEPH versions:
    pve-manager/4.4-1/eb2d6f1e (running kernel: 4.4.35-1-pve)
    ceph version 0.94.10

    Performance:
    LXC container running on ceph-capacity storage
    [root@TEST01 /]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 15.9393 s, 67.4 MB/s

    LXC container running on local 7.2K storage without Ceph
    [root@TEST02 /]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 2.12752 s, 505 MB/s

    root@XXXXXXXX:/# rados -p ceph-capacity bench 10 write --no-cleanup
    Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects
    Object prefix: benchmark_data_xxxxxxx_65731
    sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
      0       0         0         0         0         0            -          0
      1      16        46        30   119.951       120     0.694944   0.389095
      2      16        76        60   119.959       120     0.547648   0.462089
      3      16        80        64   85.3068        16     0.610866   0.468057
      4      16        96        80   79.9765        64      1.83572   0.701601
      5      16       107        91   72.7796        44      1.25032   0.774318
      6      16       122       106   70.6472        60     0.799959    0.81822
      7      16       133       117   66.8386        44      1.51327     0.8709
      8      16       145       129   64.4822        48      1.11328   0.913094
      9      16       158       142   63.0938        52     0.683712   0.917917
     10      16       158       142   56.7846         0            -   0.917917

    Total time run: 10.390136
    Total writes made: 159
    Write size: 4194304
    Bandwidth (MB/sec): 61.2119
    Stddev Bandwidth: 40.3764
    Max bandwidth (MB/sec): 120
    Min bandwidth (MB/sec): 0
    Average IOPS: 15
    Average Latency(s): 1.02432
    Stddev Latency(s): 0.567672
    Max latency(s): 2.57507
    Min latency(s): 0.135008
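
    The matching sequential-read benchmark (it reads back the objects kept by --no-cleanup) would be:

    rados -p ceph-capacity bench 10 seq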

    Any help will be really appreciated.

    Thank You,
    Dave
     
  9. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,003
    Likes Received:
    61
    Hi Dave,
    you should open a new thread for this (it has nothing to do with the video tutorial).

    Udo
     
  10. Dave Wood

    Dave Wood New Member

    Joined:
    Jan 9, 2017
    Messages:
    16
    Likes Received:
    0
  11. Brasso

    Brasso New Member

    Joined:
    Mar 5, 2017
    Messages:
    7
    Likes Received:
    0
    Hi ... Love your video ...
    I upgraded to 4.4 and ran all updates, but ended up with 4.4-1 while your video shows 4.4-2. Among other things, mine does not have the Win 10/2016 option under the OS tab when creating a new KVM. Any suggestions? Thank you :)
     
  12. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    1,634
    Likes Received:
    64
    Hi,

    As Udo said to the previous poster, please open a new thread.
     
  13. Richardsanders

    Richardsanders New Member

    Joined:
    Mar 16, 2017
    Messages:
    1
    Likes Received:
    0
