Hello.
Brand new here; I did a bit of research on the forum before asking, but I want to double-check before doing something bad.
I've set up a brand-new cluster of 5 servers.
Each server has two E5-2697 v3 CPUs, 256GB of RAM, eight 2TB Hynix SSDs as OSDs, and a pair of 80GB SSDs for boot.
It also has six Intel 10Gbps NICs, with four currently in use: two as an LACP bond for Ceph and two as an LACP bond for "the rest". The NICs are connected to a pair of Arista switches (40Gbps MLAG).
I've set up the cluster in a very "basic" way: Proxmox 5.4-7, a single Ceph pool of 40 OSDs, the cluster/public Ceph network on the dedicated LACP bond (in a different VLAN from the other switch ports), 1024 PGs, and three ceph-mons.
I had to set up a couple of VMs on the cluster before finishing its performance tuning, and I cannot remove them now.
I ran a rados benchmark yesterday (60s, 16 threads, 4MB):
Code:
Total time run: 60.055428
Total writes made: 14405
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 959.447
Stddev Bandwidth: 24.6301
Max bandwidth (MB/sec): 1008
Min bandwidth (MB/sec): 892
Average IOPS: 239
Stddev IOPS: 6
Max IOPS: 252
Min IOPS: 223
Average Latency(s): 0.066696
Stddev Latency(s): 0.0290787
Max latency(s): 0.361495
Min latency(s): 0.0267153
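For reference, results like the above (60s, 16 threads, 4MB objects) would typically come from an invocation along these lines; `mypool` is a placeholder for the actual pool name:

```
# 60-second sequential-write benchmark, 16 concurrent ops, 4MB objects
rados bench -p mypool 60 write -t 16 -b 4M --no-cleanup

# remove the benchmark objects afterwards
rados -p mypool cleanup
```

`--no-cleanup` keeps the written objects around so a follow-up `rados bench ... seq` read test can reuse them; if only the write numbers matter, it can be dropped and the cleanup step skipped.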
Before upgrading to Proxmox 6 (and the new Ceph release), I'd like to disable cephx auth and debug logging, then run another benchmark.
From what I found on the forum it seems quite easy to do: update the ceph.conf file, then restart the Ceph cluster and the VMs.
Is there a specific (and safe) way to do so?
Like:
- stop all running VMs
- edit ceph.conf
- restart ceph cluster
- start all VMs
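For the ceph.conf edit itself, the usual knobs are the three `auth_*_required` settings plus the per-subsystem debug levels. A sketch of the `[global]` section changes, assuming a stock Luminous config (on Proxmox the file lives at /etc/pve/ceph.conf):

```
[global]
    # disable cephx entirely -- all daemons and clients must be
    # restarted for this to take effect consistently
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

    # silence the chattiest debug subsystems for the benchmark run
    debug ms = 0/0
    debug osd = 0/0
    debug filestore = 0/0
    debug journal = 0/0
    debug monc = 0/0
```

After saving, restarting all Ceph daemons on each node with `systemctl restart ceph.target` (mons first, then OSDs, if done selectively) would pick up the change; the VMs need to be restarted too so their librbd clients reconnect without cephx. Reverting is the same procedure with the `auth_*` lines set back to `cephx`.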