
Install Ceph Server on Proxmox VE (Video tutorial)

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jan 24, 2017.

  1. martin

    martin Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    587
    Likes Received:
    135
    We just created a new tutorial for installing Ceph Jewel on Proxmox VE.

    The Ceph Server integration has been available for three years now and is a widely used component for building a truly open-source, hyper-converged virtualization and storage setup that is highly scalable and without limits.

    Video Tutorial
    Install Ceph Server on Proxmox VE

    Documentation
    https://pve.proxmox.com/wiki/Ceph_Server

    Any comments and feedback are very welcome!
    __________________
    Best regards,

    Martin Maurer
    Proxmox VE project leader
     
  2. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,190
    Likes Received:
    77
    Nice,
    but it should be noted in the video that the read benchmarks are largely measuring caching.

    Udo
     
  3. flexyz

    flexyz Member
    Proxmox VE Subscriber

    Joined:
    Sep 22, 2016
    Messages:
    60
    Likes Received:
    4
    I see the 3 hosts are interconnected with bonding in broadcast mode; how is that connected on the back?

    Thanks!
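
    (For reference, a full-mesh interconnect in broadcast mode is usually cabled with one direct cable from each node to each of the other two, no switch involved, and configured roughly like this in /etc/network/interfaces on each node; the interface names and addresses below are assumptions, not taken from the video:)

    # node 1 of 3; use .2/.3 on the other nodes and cross-cable eth1/eth2 directly
    auto bond0
    iface bond0 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        slaves eth1 eth2
        bond_miimon 100
        bond_mode broadcast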
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,923
    Likes Received:
    183
  5. RobFantini

    RobFantini Active Member
    Proxmox VE Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,186
    Likes Received:
    7
    thank you for the excellent video.

    Question: in the Stretch KVM, which program is used for the benchmarks?
     
  6. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,923
    Likes Received:
    183
    We used the gnome-disk-utility (Disks):

    > apt install gnome-disk-utility
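
    If you prefer a command-line tool instead, a roughly comparable sequential read test can be run with fio (this is not what the video used, and the device name below is just an example; --readonly keeps fio from writing to the device):

    > apt install fio
    > fio --name=seqread --filename=/dev/sdb --readonly --rw=read --bs=1M --ioengine=libaio --iodepth=16 --direct=1 --runtime=30 --time_based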
     
  7. blueduckdock

    blueduckdock New Member

    Joined:
    Mar 18, 2016
    Messages:
    11
    Likes Received:
    0
    That is an awesome video. Very easy to follow along, etc. I'll keep this in mind for a project I'm working on. Thanks
     
  8. Dave Wood

    Dave Wood New Member

    Joined:
    Jan 9, 2017
    Messages:
    21
    Likes Received:
    0
    Hi,

    I need your help. I'm getting very poor performance.
    I have a 3-node Proxmox cluster built on HP DL580 G7 servers. Each server has a dual-port 10 Gbps NIC.
    Each node has 4 x 600 GB 15K 2.5" SAS and 4 x 1 TB 7.2K SATA drives.

    Each node has the following partitions (I'm using logical volumes as OSDs):

    Node 1
    100GB for Proxmox (7.2 K SATA)
    2.63 TB (7.2K) OSD.3
    1.63 TB (15K) OSD.0

    Node 2
    100GB for Proxmox (7.2 K SATA)
    2.63 TB (7.2K) OSD.4
    1.63 TB (15K) OSD.1

    Node 3
    100GB for Proxmox (7.2 K SATA)
    2.63 TB (7.2K) OSD.5
    1.63 TB (15K) OSD.2

    I created two CRUSH rulesets and two pools (see the sketch after this list):


    Ruleset 0
    osd.0, osd.1, osd.2
    Pool ceph-performance uses ruleset 0


    Ruleset 1
    osd.3, osd.4, osd.5
    Pool ceph-capacity uses ruleset 1
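
    Roughly, the split was done along these lines (a sketch, not my exact commands; bucket names, weights and PG counts are illustrative, and on this Hammer release the pool property is still called crush_ruleset):

    # separate CRUSH roots for the 15K and the 7.2K disks
    ceph osd crush add-bucket fast root
    ceph osd crush add-bucket capacity root
    # each OSD then has to be placed under the right root, e.g.
    ceph osd crush create-or-move osd.0 1.63 root=fast host=node1-fast
    # one rule per root, one replica per host
    ceph osd crush rule create-simple fast-rule fast host
    ceph osd crush rule create-simple capacity-rule capacity host
    # pools bound to those rulesets (look the ids up with: ceph osd crush rule dump)
    ceph osd pool create ceph-performance 128 128 replicated
    ceph osd pool set ceph-performance crush_ruleset 1
    ceph osd pool create ceph-capacity 128 128 replicated
    ceph osd pool set ceph-capacity crush_ruleset 2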

    Proxmox and CEPH versions:
    pve-manager/4.4-1/eb2d6f1e (running kernel: 4.4.35-1-pve)
    ceph version 0.94.10

    Performance:
    LXC container running on ceph-capacity storage
    [root@TEST01 /]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 15.9393 s, 67.4 MB/s

    LXC container running on local 7.2K storage without Ceph
    [root@TEST02 /]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 2.12752 s, 505 MB/s

    root@XXXXXXXX:/# rados -p ceph-capacity bench 10 write --no-cleanup
    Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects
    Object prefix: benchmark_data_xxxxxxx_65731
    sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)

    0 0 0 0 0 0 - 0
    1 16 46 30 119.951 120 0.694944 0.389095
    2 16 76 60 119.959 120 0.547648 0.462089
    3 16 80 64 85.3068 16 0.610866 0.468057
    4 16 96 80 79.9765 64 1.83572 0.701601
    5 16 107 91 72.7796 44 1.25032 0.774318
    6 16 122 106 70.6472 60 0.799959 0.81822
    7 16 133 117 66.8386 44 1.51327 0.8709
    8 16 145 129 64.4822 48 1.11328 0.913094
    9 16 158 142 63.0938 52 0.683712 0.917917
    10 16 158 142 56.7846 0 - 0.917917

    Total time run: 10.390136
    Total writes made: 159
    Write size: 4194304
    Bandwidth (MB/sec): 61.2119
    Stddev Bandwidth: 40.3764
    Max bandwidth (MB/sec): 120
    Min bandwidth (MB/sec): 0
    Average IOPS: 15
    Average Latency(s): 1.02432
    Stddev Latency(s): 0.567672
    Max latency(s): 2.57507
    Min latency(s): 0.135008
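
    (For reference, the pool's replication settings and the matching read benchmark can be checked with the commands below; output not included:)

    # each write is sent to 'size' OSDs over the cluster network
    ceph osd pool get ceph-capacity size
    ceph osd pool get ceph-capacity crush_ruleset
    # sequential read benchmark against the objects left behind by --no-cleanup
    rados -p ceph-capacity bench 10 seq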

    Any help will be really appreciated.

    Thank You,
    Dave
     
  9. udo

    udo Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,190
    Likes Received:
    77
    Hi Dave,
    you should open a new thread for this (it has nothing to do with the video tutorial).

    Udo
     
  10. Dave Wood

    Dave Wood New Member

    Joined:
    Jan 9, 2017
    Messages:
    21
    Likes Received:
    0
  11. Brasso

    Brasso New Member

    Joined:
    Mar 5, 2017
    Messages:
    7
    Likes Received:
    0
    Hi ... love your video!
    I upgraded to 4.4 and ran all updates, but ended up with 4.4-1 while your video shows 4.4-2. Among other things, mine does not have the Win 10/2016 option under the OS tab when creating a new KVM. Any suggestions? Thank you :)
     
  12. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    2,044
    Likes Received:
    92
    Hi,

    as Udo said to the previous poster: please open a new thread.
     
  13. Richardsanders

    Richardsanders New Member

    Joined:
    Mar 16, 2017
    Messages:
    1
    Likes Received:
    0
  14. gkovacs

    gkovacs Member

    Joined:
    Dec 22, 2008
    Messages:
    476
    Likes Received:
    23
    Any insight on when erasure-coded pools and CephFS will become available through the Proxmox web interface?
     
  15. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    2,044
    Likes Received:
    92
    There are no plans for this.

    I'm curious: what would you use CephFS for through the PVE GUI?
     
  16. Ashley

    Ashley Member
    Proxmox VE Subscriber

    Joined:
    Jun 28, 2016
    Messages:
    256
    Likes Received:
    13
    It can easily be created via the CLI and then used and shown via the GUI with no issues (see the sketch below).
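
    For example, roughly like this (a sketch assuming the Jewel-era CLI; pool names and PG counts are examples, and a running MDS daemon is also required, which is not covered here):

    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data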
     
  17. gkovacs

    gkovacs Member

    Joined:
    Dec 22, 2008
    Messages:
    476
    Likes Received:
    23
    We would love to use CephFS for backup storage. Currently, to back up VMs to Ceph, we have to use OpenMediaVault running as a KVM guest, with a huge RAW disk on a Ceph pool, shared over NFS. This has a number of shortcomings:
    - it's a single point of failure, even though Ceph isn't
    - storage capacity is not easy to expand; we have to extend partitions and filesystems (while adding to Ceph is trivial)
    - hard NFS mounts can cause cluster communication problems and huge delays for reboots
    - its performance is not optimal

    If CephFS were to provide a file store over a Ceph pool, it would be great to put backups, templates and ISO images on it, without the inherent problems and complexity of NFS (roughly along the lines of the sketch below).
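
    For illustration, that could look something like mounting CephFS on every node and pointing a shared directory storage at it (a hypothetical sketch; the monitor address, mountpoint and storage name are made up):

    # on every node, e.g. via /etc/fstab
    mount -t ceph 10.10.10.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    and an /etc/pve/storage.cfg entry along the lines of:

    dir: cephfs-backup
        path /mnt/cephfs
        content backup,iso,vztmpl
        maxfiles 3
        shared 1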
     
    #17 gkovacs, Apr 22, 2017
    Last edited: Apr 22, 2017
  18. RobFantini

    RobFantini Active Member
    Proxmox VE Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,186
    Likes Received:
    7
    I think in general you'd agree Ceph should not be used to back up VMs that use the same Ceph storage.
     
  19. gkovacs

    gkovacs Member

    Joined:
    Dec 22, 2008
    Messages:
    476
    Likes Received:
    23
    If you replied to my post: I never said our VMs use the same Ceph storage. In fact they run on local storage, and we already use Ceph as backup storage that can withstand node failure. Anyway, this is not an argument against CephFS as backup storage, which would be very useful either way.
     
    #19 gkovacs, Apr 23, 2017
    Last edited: Apr 23, 2017
  20. Roberto Legname

    Roberto Legname New Member

    Joined:
    Apr 4, 2017
    Messages:
    2
    Likes Received:
    0
    Hi all,
    I have a Proxmox cluster with four nodes; on two of the nodes, the disks do not show up when I try to create an OSD!
    Do you have any ideas?

    Best Regards
    Roberto
     
