Big proxmox installations

Discussion in 'Proxmox VE: Installation and configuration' started by Fathi, May 14, 2018.

  1. spirit

    spirit Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 2, 2010
    Messages:
    3,137
    Likes Received:
    101
    I'm running small 3-node clusters, with 18-24 OSDs per cluster (1.6TB Intel S3610 SSD or 3.2TB HGST NVMe).
    Fast CPU frequency (10-12 cores, 3GHz Intel) per node. Replication x3.
    Debian Stretch/Luminous with BlueStore, and Jessie/Jewel with FileStore.
    2x10Gb per Ceph node (Ceph public and private network on the same link).
    2x10Gb per Proxmox node (SAN + LAN on the same links, different VLANs).

    The Proxmox nodes also have fast CPUs (3GHz), to reduce latency.
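    For reference, a minimal ceph.conf sketch for that kind of layout (the subnet and values are made-up examples, not Alexandre's actual config); with public and cluster traffic on the same 2x10Gb link, both networks can simply point at the same subnet:

    [CODE]
    # illustrative ceph.conf excerpt -- subnet and values are examples only
    [global]
        public_network  = 10.10.10.0/24   # client/monitor traffic
        cluster_network = 10.10.10.0/24   # OSD replication traffic, same link here
        osd_pool_default_size     = 3     # replication x3 as described above
        osd_pool_default_min_size = 2
    [/CODE]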


    I'm also using CephFS and RadosGW for sharing data between my VMs, on a dedicated cluster.
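    A sketch of what mounting such a CephFS share from inside a VM can look like with the kernel client (monitor addresses, CephX user and secret file are placeholders):

    [CODE]
    # example only -- replace monitor IPs, CephX user and secret with your own
    apt-get install ceph-common
    mount -t ceph 10.10.10.1,10.10.10.2,10.10.10.3:/ /mnt/shared \
        -o name=cephfs-user,secretfile=/etc/ceph/cephfs-user.secret
    [/CODE]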

    Small clusters because they're simpler to upgrade, and if I don't have enough storage for a specific VM, we simply move the disk with Proxmox.
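    The disk move itself can be done live from the GUI, or with something like this on the CLI (VM ID, disk and storage names are just examples):

    [CODE]
    # move the disk of VM 101 to another storage while it keeps running,
    # then drop the old copy -- names are placeholders
    qm move_disk 101 scsi0 bigger-ceph-pool --delete
    [/CODE]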


    I know two people who have triggered this bug...
    Also, I don't know if it has changed, but resyncing a VM volume/file used to require scanning all blocks of the source file.
     
    Fathi likes this.
  2. spirit

    spirit Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 2, 2010
    Messages:
    3,137
    Likes Received:
    101
    Yes, 100% high availability. Thanks Ceph && Proxmox :)
     
    Fathi likes this.
  3. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    18
    So you are putting OSDs and MONs on the same servers. Interesting.
    If I understood correctly: 3 Ceph servers hosting both OSDs and MONs, 2x10Gb for redundancy with the public and private networks on the same link, and Proxmox connected to these 3 servers via a 10Gb link (also used for the LAN).

    How much RAM do the Ceph nodes have?

    With sharding, only changed shards are synced.
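    For anyone following along, sharding is a per-volume GlusterFS option, roughly like this (volume name and shard size are just examples; it should only be enabled on volumes holding VM images, ideally from the start):

    [CODE]
    # example only -- enable sharding on a GlusterFS volume used for VM images
    gluster volume set vm-images features.shard on
    gluster volume set vm-images features.shard-block-size 64MB
    [/CODE]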
     
    Fathi likes this.
  4. spirit

    spirit Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 2, 2010
    Messages:
    3,137
    Likes Received:
    101
    64GB for my OSD nodes (6-8 OSDs).
    128GB for my Ceph MDS for CephFS (I have around 100,000,000 files).
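    With that much metadata, the MDS cache limit is what actually consumes the RAM; on Luminous it is a memory-based setting along these lines (the value here is just an example, not Alexandre's actual tuning):

    [CODE]
    # illustrative ceph.conf excerpt -- value in bytes, example only
    [mds]
        mds_cache_memory_limit = 68719476736   # ~64GiB of cache on a 128GB node
    [/CODE]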



    [QUOTE]With sharding, only changed shards are synced.[/QUOTE]
    Great! I see that this has been the case since 3.7. It was really bad before that.

    Other Ceph features I'm using: rbd snapshot export|import for disaster recovery. It works really well.
    Also snapshot/rollback. (qcow2 on top of Gluster can sometimes be dangerous for snapshots.)
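    The incremental variant of that export|import workflow looks roughly like this (pool, image, snapshot names and the backup host are placeholders):

    [CODE]
    # sketch of incremental rbd replication to a second cluster for DR
    rbd snap create rbd/vm-101-disk-1@backup2
    rbd export-diff --from-snap backup1 rbd/vm-101-disk-1@backup2 - \
        | ssh backup-host rbd import-diff - rbd/vm-101-disk-1
    [/CODE]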
     
    Fathi likes this.
  5. rkl

    rkl New Member

    Joined:
    Sep 21, 2014
    Messages:
    18
    Likes Received:
    2
    We have a 5-node Proxmox cluster and in the early days tried out GlusterFS and found it to be a buggy-as-hell, slow disaster :-( It doesn't help that Proxmox bundled ancient, horrendously buggy versions of GlusterFS for a long time (and weren't interested in updating them when I queried it in this forum). GlusterFS actually managed to create 1 *million* open file handles in a few hours and then refused to let you do any new operations after that, making it utterly useless for us. Its I/O performance, particularly for writing, seemed dismal too.

    We're now using SANs (dual bonded gigabit for speed) and iSCSI to provide the filestore to the Proxmox hosts, which works pretty well and makes live migration quite easy. What I don't like is that recent attempts to upgrade Proxmox on our clusters have generally been a failure. It seems to me that the Proxmox devs don't test cluster upgrades much - quite often you can't live migrate between different Proxmox versions (often between major releases and sometimes even between minor releases!), which is a disaster for a cluster that's supposed to have high uptimes.
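    For reference, that kind of SAN setup usually ends up in /etc/pve/storage.cfg as an iSCSI target with shared LVM on top, roughly like this (portal, target IQN, volume group and the base volume are placeholders; the base is normally filled in when the LVM storage is created via the GUI):

    [CODE]
    # illustrative /etc/pve/storage.cfg entries -- all names are examples
    iscsi: san1
            portal 192.168.10.10
            target iqn.2018-05.com.example:storage.lun1
            content none

    lvm: san1-lvm
            vgname vg_san1
            base san1:0.0.0.scsi-lun1
            shared 1
            content images
    [/CODE]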

    Our most recent upgrade was so bad (the first step, just upgrading packages within the current version and rebooting before starting the upgrade to the next version, actually borked the whole install and dumped me into an initramfs prompt after the reboot!) that we ended up ditching Proxmox completely on that node, installing CentOS 7 on the host and using virt-manager to run the VMs we had (that particular setup wasn't using iSCSI).

    I'm now very scared to do any Proxmox "warm" updates - my gut feeling is that the only safe update is to wipe the machine with a fresh Proxmox install from an ISO and reconfigure it from a backup of the config!
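    If you do go the wipe-and-reinstall route, most of a node's configuration lives in /etc/pve (plus the network config), so a pre-wipe backup can be as simple as something like this (paths are examples, adjust to whatever else your setup customises):

    [CODE]
    # rough sketch -- archive the cluster config and network setup before reinstalling
    tar czf /root/pve-config-$(hostname)-$(date +%F).tar.gz \
        /etc/pve /etc/network/interfaces /etc/hosts
    [/CODE]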
     
  6. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    18
    Has no one tried LizardFS?
     
  7. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    3,862
    Likes Received:
    230
    Proxmox never bundled GlusterFS packages.
     
  8. spirit

    spirit Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 2, 2010
    Messages:
    3,137
    Likes Received:
    101
    I have never had a problem with live migration; I've upgraded my clusters from Proxmox 2 through Proxmox 5 without reinstalling.

    Of course, you can't migrate from a newer QEMU version to an older one.
    But older -> newer QEMU has never been a problem.
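    For what it's worth, the per-VM step when upgrading a cluster node is just an online migration to an already-upgraded node and back afterwards, along these lines (VM ID and node name are placeholders):

    [CODE]
    # live-migrate VM 101 to node2 before upgrading/rebooting the current node
    qm migrate 101 node2 --online
    [/CODE]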
     
  9. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,086
    Likes Received:
    470
    @wolfgang already answered this. That Gluster botched a couple of releases is not our fault.

    We do test cluster upgrades: regular package upgrades, minor releases, and major releases. All of our infrastructure runs on PVE as well ;)

    While that is very unfortunate, most of our users have a different experience. It might be worth re-evaluating and trying to get to the bottom of your issue, since it is not the expected behaviour.
     
  10. badji

    badji Member

    Joined:
    Jan 14, 2011
    Messages:
    171
    Likes Received:
    4
    What you say is not true. I'm sorry about your experience, but your experience is not ours. @fabian and @wolfgang have answered well, and @spirit Alexandre has answered well; his company is very well placed, especially with 100% HA. I have deployed a lot of Proxmox VE clusters with OpenFiler, FreeNAS, Ceph and GlusterFS storage, and it has always worked well. Same for the updates. Currently my personal POC alone is 11 servers, running HPC, PaaS, big data, IoT... and it works well. I say like Alexandre: thanks Proxmox and thanks Ceph (and Gluster too, sorry @spirit :)).
    PS. @wolfgang, please migrate the glusterfs-server .deb to version 4.0, I want to test it once more. Thanks.
     
  11. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    18
    Based on Gluster's release schedule, you'd be better off using the vendor repository rather than waiting for Proxmox. Gluster is updated almost every month or two.
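    On a Debian Stretch node that would be something along these lines, though the exact repository path and key on download.gluster.org change per release, so check the upstream install instructions first (the path below is an assumption):

    [CODE]
    # illustrative only -- verify the current repository path and key upstream
    wget -O - https://download.gluster.org/pub/gluster/glusterfs/4.0/rsa.pub | apt-key add -
    echo "deb https://download.gluster.org/pub/gluster/glusterfs/4.0/LATEST/Debian/stretch/amd64/apt stretch main" \
        > /etc/apt/sources.list.d/gluster.list
    apt-get update && apt-get install glusterfs-server
    [/CODE]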
     
    badji likes this.
  12. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    3,862
    Likes Received:
    230
    As Alessandro 123 says, you should use the upstream packages.
    Aside from GlusterFS's internal bugs (new features that do not work correctly), those packages work well.

    It is a lot of work to build clean, perfectly working packages, so I don't think we will make our own in the near future.
     
    badji likes this.
  13. badji

    badji Member

    Joined:
    Jan 14, 2011
    Messages:
    171
    Likes Received:
    4
    I already use the upstream packages when I build external GlusterFS storage clusters. I was thinking more of the case where it's done directly with Proxmox VE. There is even an oVirt web UI for Gluster (not the oVirt virtualization web UI) which lets you automate the deployment of clusters at large scale.
     
