@Alwin : I am rebuilding the three nodes again and again using Ansible. On each new deploy I reissue the license, as I want to use the Enterprise Repository. After the reissue it takes some time before the license can be activated on the systems again, and it also takes some time until the...
Benchmark script drop for future reference:
It resides in /etc/pve (the pmxcfs cluster filesystem, so every node sees the same copy) and is started on all nodes using
bash /etc/pve/radosbench.sh
#!/bin/bash
# log stdout and stderr of the whole run to timestamped files under /root
LOGDIR=/root
STAMP=$(date +%F-%H_%M)       # capture the timestamp once so .log and .err share the same name
NAME=$(basename "$0" .sh)
exec  >"$LOGDIR/$NAME-$STAMP.log"
exec 2>"$LOGDIR/$NAME-$STAMP.err"
BLOCKSIZES="4M 64K 8K 4K"
for BS...
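The loop body is cut off above; a minimal reconstruction of what it presumably did, assuming 600-second runs against the ceph-proxmox-VMs pool used elsewhere in this thread (runtime, thread count and the cleanup step are assumptions):

for BS in $BLOCKSIZES; do
    # write phase per block size, keeping the objects for the read phase
    rados bench 600 --pool ceph-proxmox-VMs write -b $BS -t 16 --no-cleanup
    # sequential read phase against the objects just written
    rados bench 600 --pool ceph-proxmox-VMs seq -t 16
    # drop the benchmark objects before the next block size
    rados -p ceph-proxmox-VMs cleanup
done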
Maybe(!) it would be easier to install a standard Debian Buster first, then adjust the APT sources to use Proxmox and continue from there.
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
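If one goes that route, the wiki essentially boils down to pointing APT at the Proxmox repository on top of Buster; roughly (no-subscription repo shown as the example; with a license you would point at the enterprise repo instead):

# per the wiki above: add the Proxmox VE 6.x repo and key on top of Debian Buster
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi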
So here is the IOPS test with 4K blocks.
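Presumably the same invocation as the 4M run quoted later in the thread, just with the block size swapped (a reconstruction, not the exact command used):

rados bench 60 --pool ceph-proxmox-VMs write -b 4K -t 16 --no-cleanup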
So I believe there is nothing left to change in the configuration that would further improve performance.
Next up: tests from within some VMs.
@Gerhard W. Recher , I tried your sysctl tuning settings yesterday and today, but they performed worse than the ones I initially took from https://fasterdata.es.net/host-tuning/linux/ .
My current and final sysctl network tuning:
# https://fasterdata.es.net/host-tuning/linux/100g-tuning/...
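The list itself is truncated above; purely for orientation, the linked fasterdata 100G page recommends values along these lines (illustrative, not necessarily the exact list used here):

# illustrative values in the spirit of fasterdata.es.net 100G tuning, not the author's exact list
net.core.rmem_max = 2147483647
net.core.wmem_max = 2147483647
net.ipv4.tcp_rmem = 4096 87380 2147483647
net.ipv4.tcp_wmem = 4096 65536 2147483647
net.ipv4.tcp_mtu_probing = 1
net.core.default_qdisc = fq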
@Gerhard W. Recher , I documented that in German....
- Install the Mellanox mft tools and let them compile the driver (see the command sketch after this list)
- Read out and document the old firmware version (mlxburn -query -d /dev/mst/mt<TAB><TAB>)
- Flash the new version (https://www.mellanox.com/support/firmware/connectx5en - OPN...
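A rough command sequence for those steps; the mst device name is only an example (it is what the <TAB><TAB> completes to), and flint is the lower-level mft burner:

mst start                                    # load the mst modules and create the /dev/mst devices
mlxburn -query -d /dev/mst/mt4121_pciconf0   # read out and note the old firmware version
flint -d /dev/mst/mt4121_pciconf0 -i fw-ConnectX5-<OPN>.bin burn   # flash the downloaded image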
So here we have one unencrypted OSD per NVMe....
The result is not as good as with 4 OSDs per NVMe. It seems the OSD is CPU-bound.
It might be a good idea to repeat the test and use taskset to pin the relevant processes to a set of CPU cores, to identify the one CPU-hungry process (sketch below).
In other...
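A sketch of the taskset idea; the OSD id and the core list are placeholders:

# example only: pin the ceph-osd daemon of OSD 0 to cores 0-7, then watch per-core load in htop
OSD_PID=$(pgrep -f 'ceph-osd.*--id 0' | head -n1)
taskset -cp 0-7 "$OSD_PID"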
So I purged the complete Ceph installation and recreated it with 1 OSD per NVMe, this time with encrypted OSDs.
The first rados bench run did not show high write performance: since I had rebooted, the "cpupower frequency-set -g performance" setting was missing. I ran that at 16:40.
The write performance is not...
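For reference, the governor setting in question; it does not survive a reboot, hence the miss above:

cpupower frequency-set -g performance    # pin all cores to the performance governor
cpupower frequency-info | grep governor  # quick check that it took effect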
@Alwin , when you use encryption on the Microns, you are talking about the drive encryption activated via the BIOS user password, correct?
And the aes-xts engine you refer to is the "Encrypt OSD" checkbox / "ceph osd create --encrypted" CLI switch, correct?
So here is something I do not understand:
Why is there less network traffic when doing sequential reads compared to the Ceph read bandwidth?
Had to take care of scheduling the RAM replacement during the tests. Unfortunately all eight modules have to be replaced, since support is not able to...
@Alwin : How did you run three rados bench seq read clients???
First node starts fine, but the next node gives:
root@proxmox05:~# rados bench 28800 --pool ceph-proxmox-VMs seq -t 16
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0...
So how would you add the three of them up then???
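I cannot say how Alwin did it, but one known way to run several rados bench clients in parallel is to give each its own --run-name so their object sets and benchmark metadata do not clash; the per-node bandwidth figures over the same interval could then simply be summed. A sketch:

# on each node: write a data set tagged with that node's own run name
rados bench 600 --pool ceph-proxmox-VMs write -t 16 --no-cleanup --run-name $(hostname)
# on each node: sequentially read back exactly that data set
rados bench 600 --pool ceph-proxmox-VMs seq -t 16 --run-name $(hostname)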
Total time run: 600.009
Total writes made: 369420
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 2462.76
Stddev Bandwidth: 172.924
Max bandwidth (MB/sec): 4932
Min bandwidth (MB/sec)...
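Sanity check on the reported figure, since bandwidth is just total writes x object size / runtime:

echo "scale=2; 369420 * 4 / 600.009" | bc   # = 2462.76 MB/s, matching the report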
I compared it to the one from the night (Post 4).
Ran the first rados bench (rados bench 60 --pool ceph-proxmox-VMs write -b 4M -t 16 --no-cleanup)
Total time run: 60.0131
Total writes made: 83597
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec)...