Proxmox VE Ceph Server released (beta)

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jan 24, 2014.

  1. dswartz

    dswartz Member
    Proxmox Subscriber

    Joined:
    Dec 13, 2010
    Messages:
    257
    Likes Received:
    4
    I must be missing something basic. I created 3 Proxmox virtual machines under VMware (for testing). The cluster looks good, and the Ceph status tab on any given host says HEALTH_OK. Each host has an OSD showing, for a total of 3x the space of each disk. It all claims to be okay (and I have created an RBD pool too, with no errors). Yet there is no mount point showing (apart from the /var/lib/ceph/osd/ceph-X xfs mountpoints). When I create the RBD storage in the datacenter, it gives no option to set any mount point (so I assumed it is implicit?). Anyway, when it's all said and done, if I try to create a VM, when I get to the storage tab the box is greyed out because Proxmox thinks there is nowhere to put the data. The Ceph howto on the wiki was pretty scanty, but I'm pretty sure I followed all the instructions. What am I missing? Thanks... I noticed also that if I go to the Status tab under Ceph for different nodes, it initially shows 'Quorum: no' and eventually changes to an okay status. Is this just an artifact of the UI, or an indication that something is wrong?
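
    For reference, the same state can be cross-checked from any node's shell; a minimal sketch using standard Ceph commands (nothing Proxmox-specific is assumed here):

    Code:
    ceph -s              # overall health, monitor quorum and OSD count
    ceph osd tree        # confirms each host actually contributes its OSD
    ceph health detail   # lists the exact reason if health is not OK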
     
    #61 dswartz, Mar 22, 2014
    Last edited: Mar 22, 2014
  2. dswartz

    dswartz Member
    Proxmox Subscriber

    Joined:
    Dec 13, 2010
    Messages:
    257
    Likes Received:
    4
    I think I might have seen a hint in a different thread. Since I am testing these virtualized, there is no KVM support (I could set the VMware servers to support a nested hypervisor but hadn't done so...). The thread implied that KVM uses the RBD storage directly, so if there is no KVM, would that explain the lack of storage? Another data point: a different thread had suggested looking at the output of 'pvesm status' to diagnose missing storage. When I try that, it hangs...
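
    For reference, a minimal sketch of that kind of check (assuming the pool is simply called "rbd"; if the plain rbd command also hangs, the client cannot reach the monitors at all and the problem is below the Proxmox storage layer):

    Code:
    pvesm status     # Proxmox view: every configured storage with status and free space
    rbd ls -p rbd    # plain Ceph client view, bypassing the Proxmox storage layer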
     
    #62 dswartz, Mar 22, 2014
    Last edited: Mar 22, 2014
  3. dswartz

    dswartz Member
    Proxmox Subscriber

    Joined:
    Dec 13, 2010
    Messages:
    257
    Likes Received:
    4
    Deafening silence. Off to look for other solutions, I guess...
     
  4. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    Hi,
    with Ceph storage there isn't a mount point (the RBD is used by KVM directly). CephFS isn't stable yet (and AFAIK not 100% POSIX compatible).
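
    In other words, a VM disk on Ceph is just an RBD image that QEMU/KVM opens over the network; a rough sketch (the storage ID "myceph" and the image name are only examples):

    Code:
    rbd ls -p rbd                            # lists images such as vm-100-disk-1
    # a VM config line (/etc/pve/qemu-server/100.conf) referencing it via the storage "myceph":
    virtio0: myceph:vm-100-disk-1,size=32G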
    It's also not clear to me what you want to test with a virtualized hypervisor and virtualized storage for virtualization. If that doesn't work smoothly, what findings will you get?

    For a test it makes much more sense to take three desktop boxes with some SATA HDDs.

    Udo
     
  5. dswartz

    dswartz Member
    Proxmox Subscriber

    Joined:
    Dec 13, 2010
    Messages:
    257
    Likes Received:
    4
    I don't have 3 spare boxes. I'm not interested in performance, just in learning how this works, and virtualization should be fine for that. Regardless, I understand now that there is no mount point, but that doesn't explain why, when I tried creating a VM, no storage was presented to use.
     
  6. axe

    axe New Member

    Joined:
    Sep 2, 2013
    Messages:
    5
    Likes Received:
    0
    You need to add the storage under Datacenter -> Storage -> Add -> RBD. There you assign the pool, using the name you specified when creating the Ceph pool.
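
    For reference, that GUI step just writes an entry to /etc/pve/storage.cfg; a sketch of what it ends up looking like (storage ID, pool name and monitor addresses are placeholders):

    Code:
    rbd: myceph
           monhost 192.168.0.1;192.168.0.2;192.168.0.3
           pool rbd
           content images
           username admin

    If I remember correctly, Proxmox also expects the keyring for that storage at /etc/pve/priv/ceph/<storage id>.keyring (e.g. /etc/pve/priv/ceph/myceph.keyring); that step from the wiki howto is easy to miss and would leave the storage unusable.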
     
  7. dswartz

    dswartz Member
    Proxmox Subscriber

    Joined:
    Dec 13, 2010
    Messages:
    257
    Likes Received:
    4
    Maybe I was not clear. I did all of that, per the HOWTO. Yet when I create a VM and move through the wizard, when I get to the storage part there is nothing to choose from.
     
  8. mo_

    mo_ Member

    Joined:
    Oct 27, 2011
    Messages:
    399
    Likes Received:
    3
    One possible reason for that would be that the Ceph cluster does not have quorum. You mentioned it showing HEALTH_OK earlier, but maybe that changed?
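
    A quick way to check that directly, as a sketch:

    Code:
    ceph mon stat                              # short summary: monitors and current quorum members
    ceph quorum_status --format json-pretty    # detailed quorum state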
     
  9. isodude

    isodude New Member

    Joined:
    Mar 31, 2014
    Messages:
    8
    Likes Received:
    0
    It seems that doing a full clone is quite intensive and you lose sparseness on the images. Any reason for not using rbd flatten instead of qemu-img convert? I.e., first clone the image and then flatten it.
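
    For reference, the clone-then-flatten sequence would look roughly like this (pool and image names are only examples):

    Code:
    rbd snap create rbd/base-disk@clone-snap
    rbd snap protect rbd/base-disk@clone-snap             # a clone parent must be a protected snapshot
    rbd clone rbd/base-disk@clone-snap rbd/vm-101-disk-1
    rbd flatten rbd/vm-101-disk-1                         # copy the parent's data in and detach the clone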

    Cheers,
    Josef
     
  10. dswartz

    dswartz Member
    Proxmox Subscriber

    Joined:
    Dec 13, 2010
    Messages:
    257
    Likes Received:
    4
    Weird, I could have sworn I replied to this yesterday :( Anyway, I tried several times, including bare seconds after seeing ceph reporting "HEALTH_OK". I pretty much gave up at that point...
     
  11. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,520
    Likes Received:
    21
    Hello Udo,
    we'll be testing in a month or so, and have a few 400GB Samsung 840s.

    Were you able to solve the I/O issue you had using the Samsung 840s?
     
  12. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    Hi,
    no (but I have the 128GB SSD). I use Intel + Corsair SSDs for the journal (and soon, with Firefly, there will be no need for the journal SSDs).

    Udo
     
  13. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
    Do not confuse *copies* and *nodes*. At least 2 copies are recommended because, in case of a node failure, you still have a backup copy of your data in the cluster.

    The number of nodes is different: to get a *real* cluster environment, you always need a minimum of 3 nodes. This is because of how *quorum* works. If a node fails, the two remaining nodes can still work and make decisions together. If you only have 2 nodes, you get a split-brain situation where neither node knows what to do (each one is alone). Search for "quorum" to get more information ;)
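
    The number of copies is a per-pool setting and independent of the node count; a minimal sketch (the pool name "rbd" is just an example):

    Code:
    ceph osd pool get rbd size         # how many copies the pool keeps
    ceph osd pool set rbd size 2       # keep 2 copies of every object
    ceph osd pool set rbd min_size 1   # still allow I/O when only 1 copy is available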
     
  14. MimCom

    MimCom Member

    Joined:
    Apr 22, 2011
    Messages:
    202
    Likes Received:
    3
    I would really love to have this, but doesn't it depend on a stable cephfs?

    This could be huge.
     
  15. mo_

    mo_ Member

    Joined:
    Oct 27, 2011
    Messages:
    399
    Likes Received:
    3
    CephFS might finally be called stable in Q3 this year:

    Code:
    <scuttlemonkey> I think the rough estimate that was given (napkin sketch) was sometime in Q3
    <scuttlemonkey> but that's predicated on a lot of other things happening
    <scuttlemonkey> from a stability standpoint we're actually looking pretty good (at last observation)
    Even though CephFS is not yet called stable, it can already be considered as such (for tests), since all that's really lacking at the moment is an fsck tool... or so I'm given to understand.
     
  16. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
    Today I installed a new node in my cluster, and "pveceph install" installed the "dumpling" version of Ceph (0.67) instead of Emperor (0.72) as before. Is that normal?

    Code:
    cat /etc/apt/sources.list.d/ceph.list
    
    deb http://ceph.com/debian-dumpling wheezy main
     
  17. mo_

    mo_ Member

    Joined:
    Oct 27, 2011
    Messages:
    399
    Likes Received:
    3
    I think this was changed, most likely because Dumpling is the latest fully supported release (until Firefly is one month past release); Emperor is only a community release.
     
  18. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,459
    Likes Received:
    310
    You can now specify the version using "pveceph install --version emperor"
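
    For reference, a short sketch of using and then verifying that (the second and third commands are just standard checks):

    Code:
    pveceph install --version emperor
    cat /etc/apt/sources.list.d/ceph.list   # the repo file should now reference the chosen release
    ceph --version                          # confirm which release is actually installed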
     
  19. felipe

    felipe Member

    Joined:
    Oct 28, 2013
    Messages:
    152
    Likes Received:
    1
    Thank you Proxmox Team!

    The Ceph integration is great. Works perfectly out of the box! :)
    With the next update including the firewall, Proxmox will handle everything needed in the infrastructure :)
    I have installed a test Proxmox Ceph cluster on our Proxmox cluster: just 3 and then 4 nodes with 2 OSDs each. Performance is OK for just being a virtual install on an existing Proxmox cluster with only a 1 Gbit network. I can still run a Win2008 server for testing.
    IOPS are like 3 standard SATA disks, and CrystalDiskMark says 40 MB/s for reads (which seems to be the limit of a 1 Gbit network if OSDs and MONs are on the same 1G network..?).
    I used it to test cases like removing and adding OSDs, turning off one node, or even shutting down the whole cluster (stopping the VMs) to check what happens to the Ceph cluster. But I could not kill it so far; at the moment it looks really stable. :)

    Soon we will purchase our new hardware, so meanwhile I will go on with testing.

    I have some questions about live performance and configuration:
    1) How many standard SATA disks do I need to get at least 300 MB/s read/write for a single VM (network will be 10G) with a replication of 2 (we will start with 3 nodes)?
    2) How can I manage different pools in the Proxmox GUI (is it possible)? One for SSDs and one for SATA? How do I tell a newly created OSD which pool to join?
    3) Normally, if I set replica = 2, Ceph tries to store the data in 2 different OSDs on different hosts?
    4) We will start with 10 bigger VMs (Win2008R2 TS) on the Ceph cluster. Normal VMs will have peaks (read/write) of about 50 MB/s and one will have peaks of about 300 MB/s+ (fileserver). We don't need too many IOPS (5-10 MB/s at 4k is OK) for the moment.
    5) How do I configure the network to have one network for the (existing) Proxmox cluster, one network for the monitors, and one for the OSDs? In the howto on the Proxmox wiki it is just one network for OSDs and monitors...
    6) What ratio are you using for OSDs to SSD journal? Ceph talks about around 1:5.
    7) What kind of SSDs perform well? (I still have no experience with SSDs on our servers...)
    8) Switch config: using two 10G switches and two network cards per node with rr; one network for OSDs, one for monitors, and a third network with a standard switch for the VMs... Has anyone done this already, or something similar?

    best regards
    philipp
     
  20. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
    Ceph Firefly (0.80) has been released. Is PVE working with this version?
     