Search results

  1. Ceph blustore over RDMA performance gain

    I rebooted the whole cluster, all 4 nodes! Do you have a running cluster with RDMA? Which steps did you take, in detail, to get it running? I'm totally stuck ...
  2. Ceph blustore over RDMA performance gain

    I just succeeded in making udaddy and rping happy. I followed elurex's suggestion... unpack, build the OFED driver, cd to DEBS, dpkg --force-overwrite -i *.deb, reboot, and all RDMA pingers are happy. But after changing ceph.conf to RDMA, mon and mgr are still unhappy with memlock, even though I set these to infinity... (see the memlock sketch after this list)
  3. Ceph blustore over RDMA performance gain

    did not work ... (see the note on glob expansion after this list)
    apt-get remove proxmox-ve pve*
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package pve
    E: Unable to locate package pveam.log
    E: Couldn't find any package by glob 'pveam.log'
    E: Couldn't find any package by regex...
  4. Ceph blustore over RDMA performance gain

    I reverted my ceph.conf to TCP mode and the cluster is running... but with these new drivers I can't rping or udaddy ...
    udaddy
    failed to create event channel: No such device
    rping -s -C 10 -v
    rdma_create_event_channel: No such device
    Something got broken, I suppose ... any hints? (see the diagnostic checks after this list)
  5. Ceph blustore over RDMA performance gain

    I did not succeed! Both mon and mgr did not start.
    Jun 6 21:15:03 pve01 ceph-mgr[5485]: -2> 2018-06-06 21:15:03.683201 7f8b502d66c0 -1 RDMAStack RDMAStack!!! WARNING !!! For RDMA to work properly user memlock (ulimit -l) must be big enough to allow large amount of registered memory. We...
  6. Ceph blustore over RDMA performance gain

    throws many errors/warnings !
    root@pve01:/usr/local/src/mlnx-en-4.3-1.0.1.0-debian9.4-x86_64-ext/DEBS# dpkg -i *.deb
    (Reading database ... 236351 files and directories currently installed.)
    Preparing to unpack ibverbs-utils_41mlnx1-OFED.4.3.0.1.8.43101_amd64.deb ...
    Unpacking ibverbs-utils...
  7. Ceph blustore over RDMA performance gain

    Hi, just updated the firmware ... the build process made a 9.4 ???
    cd /usr/local/src
    wget "http://content.mellanox.com/ofed/MLNX_EN-4.3-1.0.1.0/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz"
    tar -xzvf mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz
    cd mlnx-en-4.3-1.0.1.0-debian9.1-x86_64/...
  8. Will be Ceph Mimic on pve5.x?

    Oops, 2nd/3rd quarter 2019????? A bit late, in my opinion!
  9. Ceph blustore over RDMA performance gain

    After building the Mellanox driver... what are the exact steps? Unpack and install? It would really be nice to have a working cookbook :)
  10. Ceph blustore over RDMA performance gain

    thanks ... but some warnings ... shall I ignore them or specify '--skip-distro-check' ? Also, which firmware do you have ?
    # ibv_devinfo
    hca_id: mlx4_0
        transport:  InfiniBand (0)
        fw_ver:     2.40.7000
        node_guid...
  11. Ceph blustore over RDMA performance gain

    ConnectX-3 Pro running in 56 Gbit/s mode ... SX1002 switch ... and appropriate cables... see also my personal profile footer and my original RDMA thread... I don't know how to proceed with this driver ... you said invoke "mlnx_add_kernel_support.sh" and compile .... and install all the stuff in... (see the consolidated install steps after this list)
  12. Ceph blustore over RDMA performance gain

    Can you direct me to a download link for topic #1 ? The out-of-the-box PVE version is this one ...
    strings /lib/modules/4.15.17-2-pve/kernel/drivers/net/ethernet/mellanox/mlx4/mlx4_core.ko | grep -i versio
    (Installed FW version is %d.%d.%03d)
    This driver version supports only revisions %d to %d
    FW...
  13. ceph performance 4node all NVMe 56GBit Ethernet

    I have Proxmox 5.2, based on Debian Stretch. Which drivers do you install? Is an apt source available? So you think I should give this another try, after giving up in October? On a now-production cluster?
  14. ceph performance 4node all NVMe 56GBit Ethernet

    I gave up in October 2017 .... Which steps did you take to succeed?
  15. Ceph blustore over RDMA performance gain

    Amazing! Can you please report the detailed configuration steps for PVE to accomplish this? I failed to get this running! (see the ceph.conf sketch after this list) Regards, Gerhard
  16. Proxmox VE Ceph Benchmark 2018/02

    My 2 cents ... on a 56 Gbit/s network configuration, see my signature (the benchmark command is sketched after this list):
    Total time run:         60.022982
    Total writes made:      41366
    Write size:             4194304
    Object size:            4194304
    Bandwidth (MB/sec):     2756.68
    Stddev Bandwidth:       174.339
    Max bandwidth (MB/sec): 2976
    Min...
  17. nvmedisk move to other slot

    Hi Fabian, shutting down the whole cluster, moving the disks as intended, and powering up should also work like a charm?
  18. nvmedisk move to other slot

    Thx Fabian, like this one?
    ID=$1
    echo "wait for cluster ok"
    while ! ceph health | grep HEALTH_OK ; do echo -n "."; sleep 10 ; done
    echo "ceph osd out $ID"
    ceph osd out $ID
    sleep 10
    while ! ceph health | grep HEALTH_OK ; do sleep 10 ; done
    echo "systemctl stop ceph-osd@$ID.service"
    systemctl...
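
Sketches referenced above

Several results (2, 4, 15) revolve around switching ceph.conf from TCP to RDMA, but none of the snippets quotes the actual stanza. The following is only a minimal sketch, assuming the Luminous-era async+rdma messenger options; the device name mlx4_0 is taken from the ibv_devinfo output in result 10 and has to match your HCA.

    [global]
        # minimal sketch, not quoted from the thread above
        ms_type = async+rdma
        # HCA name as reported by ibv_devinfo (result 10)
        ms_async_rdma_device_name = mlx4_0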
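
Results 2 and 5 show mon and mgr refusing to start with the RDMAStack memlock warning even though the limit was set to infinity. A plausible explanation is that ulimit/limits.conf settings do not apply to daemons started by systemd, so the limit has to be raised per unit. A sketch, assuming the standard ceph-mon@/ceph-mgr@/ceph-osd@ unit names:

    # raise the memlock limit for the Ceph daemons via systemd drop-ins
    for unit in ceph-mon@ ceph-mgr@ ceph-osd@; do
        mkdir -p /etc/systemd/system/${unit}.service.d
        printf '[Service]\nLimitMEMLOCK=infinity\n' \
            > /etc/systemd/system/${unit}.service.d/rdma-memlock.conf
    done
    systemctl daemon-reload
    # then restart the daemons, e.g. systemctl restart ceph-mon@pve01.service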
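
The errors in result 3 ("Unable to locate package pveam.log") indicate that the unquoted pve* was expanded by the shell against files in the current directory before apt ever saw it. Quoting the pattern is one way around that; whether removing these packages is advisable at all is a separate question.

    apt-get remove proxmox-ve 'pve*'    # quoted, so apt receives the pattern instead of file names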
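
"rdma_create_event_channel: No such device" in result 4 usually points at missing userspace RDMA device nodes rather than a link problem. A few generic checks, none of them taken from the thread:

    lsmod | grep -E 'mlx4_ib|ib_uverbs|rdma_ucm'   # are the RDMA kernel modules loaded?
    ls /dev/infiniband/                            # uverbs* and rdma_cm device nodes present?
    ibv_devices                                    # does libibverbs see the HCA at all?
    ibv_devinfo                                    # port state and firmware (compare result 10)
    modprobe rdma_ucm                              # provides /dev/infiniband/rdma_cm if it is missing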
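
Results 2, 6, 7 and 11 together describe the MLNX_EN driver path: download and unpack the bundle, rebuild the packages for the running PVE kernel with mlnx_add_kernel_support.sh, then install the resulting DEBs with --force-overwrite and reboot. The consolidation below only restates what the snippets quote; the arguments to mlnx_add_kernel_support.sh and the exact location of the rebuilt tree are not given there.

    cd /usr/local/src
    wget "http://content.mellanox.com/ofed/MLNX_EN-4.3-1.0.1.0/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz"
    tar -xzvf mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz
    cd mlnx-en-4.3-1.0.1.0-debian9.1-x86_64/
    # result 11: run mlnx_add_kernel_support.sh here to rebuild for the PVE kernel;
    # result 6 shows the rebuilt tree as .../mlnx-en-4.3-1.0.1.0-debian9.4-x86_64-ext
    cd /usr/local/src/mlnx-en-4.3-1.0.1.0-debian9.4-x86_64-ext/DEBS
    dpkg --force-overwrite -i *.deb    # result 2
    reboot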
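
The figures in result 16 (60-second run, 4 MiB write and object size, bandwidth in MB/sec) have the shape of rados bench output. For reference, a write benchmark of that form would be started with something like the following; the pool name is a placeholder, not taken from the thread.

    rados bench -p <pool> 60 write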
