Search results

  1. elurex

    Ceph bluestore over RDMA performance gain

    Sometimes my ceph-mon@[server id].service and ceph-mgr@[server id].service do not start automatically, so I have to do the following: systemctl enable ceph-mon@[server id].service, systemctl enable ceph-mgr@[server id].service, systemctl start ceph-mon@[server id].service, systemctl start...
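
    A minimal sketch of that enable/start sequence, with pve01 standing in for [server id] and the truncated last command assumed to be the mgr start:
      # enable the per-host mon and mgr units so they come up on boot
      systemctl enable ceph-mon@pve01.service
      systemctl enable ceph-mgr@pve01.service
      # then start them for the current boot
      systemctl start ceph-mon@pve01.service
      systemctl start ceph-mgr@pve01.service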
  2. elurex

    Ceph bluestore over RDMA performance gain

    I do not use the mlnx_install script... please go to the DEBS folder and manually install all the debs with dpkg -i *.deb, then run apt --fix-broken install
  3. elurex

    Ceph bluestore over RDMA performance gain

    you should use ConnectX-3_Pro_EN_Firmware: fw-ConnectX3Pro-rel-2_42_5000-MCX314A-BCC_Ax-FlexBoot-3.4.752, MD5SUM: c409cbdf17080773ccde7dd1adb22de1, Release Date: 07-Sep-17
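
    A sketch of verifying and flashing that image with the standard Mellanox firmware tools (mstflint/MFT); the mst device path and the .bin filename are assumptions for a ConnectX-3 Pro:
      # check the download against the published MD5 sum
      md5sum fw-ConnectX3Pro-rel-2_42_5000-MCX314A-BCC_Ax-FlexBoot-3.4.752.bin
      # start the Mellanox software tools and burn the image (device path is an assumption)
      mst start
      flint -d /dev/mst/mt4103_pci_cr0 -i fw-ConnectX3Pro-rel-2_42_5000-MCX314A-BCC_Ax-FlexBoot-3.4.752.bin burn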
  4. elurex

    Ceph bluestore over RDMA performance gain

    try root@pve01:/usr/local/src/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64# ./mlnx_add_kernel_support.sh -m ./ --make-tgz
  5. elurex

    Ceph bluestore over RDMA performance gain

    I am not sure which Mellanox NIC you have... but for a Mellanox EN NIC use the following: http://content.mellanox.com/ofed/MLNX_EN-4.3-1.0.1.0/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz; for a Mellanox VPI NIC use the following...
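
    A minimal fetch-and-extract sketch for the EN package linked above (the VPI link is truncated in the quote, so only the EN one is shown):
      cd /usr/local/src
      wget http://content.mellanox.com/ofed/MLNX_EN-4.3-1.0.1.0/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz
      tar xzf mlnx-en-4.3-1.0.1.0-debian9.1-x86_64.tgz
      cd mlnx-en-4.3-1.0.1.0-debian9.1-x86_64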
  6. elurex

    ceph performance 4node all NVMe 56GBit Ethernet

    check my other thread... I gave full details there
  7. elurex

    Ceph bluestore over RDMA performance gain

    Gerhard, I thought you were the first one to do so. Here are my steps: download and install the Mellanox driver for Debian 9.1 (you must use Mellanox's mlnx_add_kernel_support.sh to compile it for 9.4), go to DEBS and just install everything with dpkg -i, then run apt --fix-broken install. Follow...
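
    A sketch of those steps end to end, assuming the EN package path from post 4 above; the name and location of the tarball that mlnx_add_kernel_support.sh produces are assumptions, so use whatever the script reports:
      cd /usr/local/src/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64
      # rebuild the driver packages for the running Debian 9.4 / PVE kernel
      ./mlnx_add_kernel_support.sh -m ./ --make-tgz
      # unpack the tarball the script reports, then install everything in DEBS
      tar xzf /tmp/mlnx-en-4.3-1.0.1.0-debian9.1-x86_64-ext.tgz    # assumed output name
      cd mlnx-en-4.3-1.0.1.0-debian9.1-x86_64-ext/DEBS
      dpkg -i *.deb
      apt --fix-broken install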
  8. elurex

    ceph performance 4node all NVMe 56GBit Ethernet

    I followed https://community.mellanox.com/docs/DOC-2721 exactly (no additional compiling needed, but you must use the Mellanox driver instead of the in-tree driver), except that I skipped the local gid step and also ceph-mds.service
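
    For reference, a minimal sketch of the kind of ceph.conf settings that guide revolves around (standard Luminous ms_async_rdma_* options; the device name mlx4_0 is an assumption for a ConnectX-3):
      # add under [global] in /etc/ceph/ceph.conf on every node
      ms_type = async+rdma
      ms_async_rdma_device_name = mlx4_0
      # ms_async_rdma_local_gid = <per-node GID>   # the step skipped above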
  9. elurex

    Ceph bluestore over RDMA performance gain

    I want to share the following testing with you: a 4-node PVE cluster with 3 Ceph Bluestore nodes, 36 OSDs in total. OSD: ST6000NM0034; block.db & block.wal device: Samsung SM961 512GB; NIC: Mellanox ConnectX-3 VPI dual-port 40 Gbps; Switch: Mellanox SX6036T; Network: IPoIB, separated public network &...
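
    A sketch of what the separated-networks part of such a setup typically looks like in ceph.conf (the IPoIB subnets below are hypothetical placeholders, not the ones used here):
      # /etc/ceph/ceph.conf, [global] section -- example subnets only
      public_network  = 10.10.10.0/24    # mon/client traffic over IPoIB
      cluster_network = 10.10.20.0/24    # OSD replication traffic over IPoIB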
  10. elurex

    ceph performance 4node all NVMe 56GBit Ethernet

    well... I figured it out as well.
  11. elurex

    ceph performance 4node all NVMe 56GBit Ethernet

    do you have step-by-step instructions on how to enable Ceph over RDMA? I also followed your email thread on [ceph-users] "RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3". I am also stuck on ms_async_rdma_local_gid= as I have multiple nodes.
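
    A sketch of how the per-node GID can be read so that each host sets its own value in its local config (device name, port and index are assumptions for a single-port ConnectX-3 setup; Mellanox's show_gids script prints the same table):
      # print the GID at port 1, index 0 of the first RDMA device on this node
      cat /sys/class/infiniband/mlx4_0/ports/1/gids/0
      # then, per node:  ms_async_rdma_local_gid = <the value printed above>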
  12. elurex

    PVE 5.2 ceph over RDMA

    thanks for the update!
  13. elurex

    PVE 5.2 ceph over RDMA

    does PVE 5.2 Ceph support RDMA? It seems to me that it is already in the in-tree repo: https://github.com/Mellanox/ceph/tree/luminous_v12.2.4-rdma (/home/builder/source/ceph-12.2.5/src/msg/async/rdma/RDMAServerSocketImpl.cc)
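
    A quick way to check whether the packaged binaries were at least built against the RDMA stack (a heuristic only, not proof that async+rdma is enabled; the mon id is assumed to be the short hostname):
      # an RDMA-capable messenger links against libibverbs
      ldd /usr/bin/ceph-osd | grep -i ibverbs
      # and the ms_async_rdma_* options should show up on a running daemon
      ceph daemon mon.$(hostname -s) config show | grep ms_async_rdma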
  14. elurex

    ixgbe initialize fails

    mine works perfectly: Linux x9sri 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64. The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright...
  15. elurex

    EDAC sbridge error for PVE 5.1

    my bad... it is giving out the device module now... previously it was giving UP while loading
  16. elurex

    EDAC sbridge error for PVE 5.1

    I am still getting the error on kernel 4.15-17 (https://bugzilla.proxmox.com/show_bug.cgi?id=1718); using pve-kernel-4.15-17 it still errors out with the sb_edac error: root@r420b:~# lspci -vnnn -k -s 3f:0e.0 3f:0e.0 System peripheral [0880]: Intel Corporation Xeon E7 v2/Xeon...
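
    A minimal way to reproduce that check and see whether sb_edac loads cleanly (the PCI address 3f:0e.0 is specific to this box):
      # identify the memory-controller function the EDAC driver binds to
      lspci -vnnn -k -s 3f:0e.0
      # try loading the driver and look for the error in the kernel log
      modprobe sb_edac
      dmesg | grep -iE 'edac|sb_edac' | tail -n 20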
  17. elurex

    glusterfs 4.0 released.. lxc on glusterfs?

    GlusterFS 4.0 has been released, and it has much better support for containers (Docker). Is it now possible for Proxmox to run LXC on GlusterFS 4.0, i.e. an LXC rootfs on GlusterFS?
  18. elurex

    [SOLVED] LXC: More than 10 mount points possible?

    I only use lxc.mount.entry because it allows migration between nodes
  19. elurex

    [SOLVED] LXC: More than 10 mount points possible?

    what about using an lxc.mount.entry = line in the config?
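
    A sketch of what such an entry looks like in a container's config under /etc/pve/lxc/ (the CTID and paths are hypothetical):
      # /etc/pve/lxc/101.conf -- bind-mount a host directory into the container
      lxc.mount.entry = /tank/shared srv/shared none bind,create=dir 0 0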
