Disabling cephx auth

Klug

Active Member
Jul 24, 2019
Hello.

Brand new here; I did a bit of research on the forum before asking, but I want to double-check before doing something bad :cool:

I've set up a brand new cluster of 5 servers.
Each server has two E5-2697 v3, 256 GB of RAM, eight 2 TB Hynix SSDs as OSDs, and a pair of 80 GB SSDs for boot.
It also has six Intel 10 Gbps NICs, with four currently used: two as an LACP bond for Ceph and two as an LACP bond for "the rest". The NICs are connected to a pair of Arista switches (40 Gbps MLAG).

I've set up the cluster in a very "basic" way: Proxmox 5.4-7, a single Ceph pool over 40 OSDs, cluster/public Ceph network on the dedicated LACP bond (in a different VLAN from the other switch ports), 1024 PGs, and three ceph-mons.
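For context, the pool itself is nothing fancy; creating it by hand would be roughly equivalent to the following (pool name is a placeholder, 1024 PGs with the default replicated rule):
Code:
# create a replicated pool with 1024 placement groups and tag it for RBD use
ceph osd pool create vmpool 1024 1024 replicated
ceph osd pool application enable vmpool rbd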

I had to set up a couple of VMs on the cluster before finishing its tuning (performance-wise) and I cannot remove them now.

I ran a rados benchmark yesterday (60s, 16 threads, 4MB):
Code:
Total time run:         60.055428
Total writes made:      14405
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     959.447
Stddev Bandwidth:       24.6301
Max bandwidth (MB/sec): 1008
Min bandwidth (MB/sec): 892
Average IOPS:           239
Stddev IOPS:            6
Max IOPS:               252
Min IOPS:               223
Average Latency(s):     0.066696
Stddev Latency(s):      0.0290787
Max latency(s):         0.361495
Min latency(s):         0.0267153
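
For reference, the numbers above come from a plain rados write bench, i.e. something along these lines (pool name is a placeholder; add --no-cleanup only if you want to run a read bench on the same objects afterwards):
Code:
# 60 second write benchmark, 16 concurrent operations, 4 MB objects
rados bench -p vmpool 60 write -t 16 -b 4194304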

Before upgrading to Proxmox 6 (and the newer Ceph), I'd like to disable cephx auth and debug logging and run another benchmark.

From what I found on the forum it seems quite easy to do: update the ceph.conf file (rough sketch below), then restart the Ceph cluster and the VMs.

Is there a specific (and safe) way to do so?
Like:
  • stop all running VMs
  • edit ceph.conf
  • restart ceph cluster
  • start all VMs
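
Concretely, I assume the ceph.conf part boils down to something like this in the [global] section (a sketch based on what I found, not an authoritative recipe; the debug lines are just the subsystems that seem to be silenced most often):
Code:
# /etc/pve/ceph.conf on Proxmox (symlinked as /etc/ceph/ceph.conf), [global] section
auth_cluster_required = none
auth_service_required = none
auth_client_required = none

# optionally silence the chattiest debug subsystems
debug_ms = 0/0
debug_osd = 0/0
debug_auth = 0/0

and then, with the VMs stopped, restarting the Ceph services on each node, e.g.:
Code:
systemctl restart ceph.target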
 
Klug said:
Is there a specific (and safe) way to do so?

As long as all VMs/CTs are offline, it should not be a problem.

Klug said:
Before upgrading to Proxmox 6 (and the newer Ceph), I'd like to disable cephx auth and debug logging and run another benchmark.

I don't believe there will be much gain, as it already maxes out the 10 GbE, and with 5 nodes it is likely that the LACP hashing has put them on the same link.
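
You can check that on the bond itself, and if it is the case, a layer3+4 transmit hash lets different TCP connections spread across both links. A sketch, assuming the Ceph bond is bond1 and with placeholder interface names (the switch side has to hash accordingly as well):
Code:
# shows the current transmit hash policy and the state of both slaves
cat /proc/net/bonding/bond1

# /etc/network/interfaces, Ceph bond (sketch)
auto bond1
iface bond1 inet static
        address 10.10.10.11/24
        bond-slaves enp3s0f0 enp3s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100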
 
Thanks for the answer.

LACP and 10 GbE is another issue, you're right.
Is Mellanox (and RDMA and InfiniBand) still supported in Proxmox 6?
 
Klug said:
Is Mellanox (and RDMA and InfiniBand) still supported in Proxmox 6?

The Ceph packages have RDMA support compiled in, but AFAIK InfiniBand isn't supported by Ceph (EoIB). And I don't know if RDMA with EoIB works.
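
For what it's worth, enabling the RDMA messenger would in principle be a ceph.conf change along these lines, but it is considered experimental and I haven't verified it on Proxmox (the device name is a placeholder):
Code:
# [global] section -- experimental, sketch only
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0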
 
I've just run the same benchmark on a new cluster I've set up.

Three servers with Proxmox 6.3-4.
Each server has two E5-2697 v3, 384 GB of RAM, and eight 3.8 TB Hynix SSDs as OSDs (same as in the previous bench but twice the size).
The Ceph LAN is meshed 25 Gbps InfiniBand in broadcast mode (see here: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server)

Code:
Total time run:         60.0418
Total writes made:      17544
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     1168.79
Stddev Bandwidth:       51.932
Max bandwidth (MB/sec): 1292
Min bandwidth (MB/sec): 1036
Average IOPS:           292
Stddev IOPS:            12.983
Max IOPS:               323
Min IOPS:               259
Average Latency(s):     0.0547544
Stddev Latency(s):      0.0168657
Max latency(s):         0.315808

It's better than the 5-server cluster with a switched 10 Gbps Ethernet Ceph LAN, but not outrageously better.
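
In case it helps anyone reproducing this: the broadcast variant from the wiki page linked above boils down to bonding the two mesh-facing ports on each node in broadcast mode, with all three nodes in one subnet. Roughly (interface names and addresses are placeholders):
Code:
# /etc/network/interfaces on node 1 (sketch); nodes 2 and 3 get .51 and .52
auto bond2
iface bond2 inet static
        address 10.15.15.50/24
        bond-slaves ens1f0 ens1f1
        bond-mode broadcast
        bond-miimon 100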
 
