Just started an old Win 2008 R2 Enterprise system to have a look...
I am not using SCSI since I want to use FSTRIM - which only works with SATA on these old Windows versions...
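For reference, on the Proxmox side a SATA disk with discard enabled would be attached roughly like this (the VM ID and storage name are placeholders, not my actual setup):

qm set 100 --sata0 local-lvm:vm-100-disk-0,discard=on,ssd=1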
Another idea:
- Use ceph-volume first to create the OSDs
- Then out, stop, and purge them
- Then recreate them with pveceph using the already existing LVs (rough sketch below)...
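Roughly in shell form (OSD ID 0 and the device/LV paths are placeholders; this is a sketch of the idea, not a tested procedure, and whether pveceph accepts an existing LV is exactly the open question):

ceph-volume lvm create --data /dev/nvme0n1   # let ceph-volume lay out the LVs
ceph osd out osd.0                           # take the new OSD out again
systemctl stop ceph-osd@0
pveceph osd destroy 0                        # remove the OSD but keep the LVs (no --cleanup)
pveceph osd create /dev/<vg>/<lv>            # recreate it on the existing LV via pveceph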
What I tried to say was:
- Use pvcreate on your NVMe to create an LVM physical volume (and vgcreate to turn it into a volume group)
- Use lvcreate on that volume group to create 4 logical volumes
- Use these LVs in pveceph, plus your WAL device (see the sketch below)
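A minimal sketch of that sequence, assuming pveceph accepts an LV path as the data device (device names, VG/LV names and the size are placeholders):

pvcreate /dev/nvme0n1
vgcreate ceph-nvme0 /dev/nvme0n1
for i in 1 2 3 4; do
    lvcreate -L 890G -n osd-data-$i ceph-nvme0   # size is a placeholder, roughly a quarter of the disk
done
pveceph osd create /dev/ceph-nvme0/osd-data-1 --wal_dev /dev/nvme1n1

The same pveceph call would then be repeated for osd-data-2 through osd-data-4.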
You could achieve this more easily. See https://forum.proxmox.com/threads/recommended-way-of-creating-multiple-osds-per-nvme-disk.52252/post-242117
When you look at the result, it creates an LVM volume group with 4 logical volumes. You should be able to use those volumes with pveceph.
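If I read the linked post correctly (my assumption), the shortcut there is ceph-volume's batch mode, which creates the volume group and the four LVs on its own:

ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1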
The ThomasKrenn RA1112 1U pizza box uses an Asus KRPA-U16 motherboard, which runs an AMI BIOS.
The only settings I changed are:
- Pressed F5 for Optimized Defaults
- Disabled CSM support (we only use UEFI)
We wanted to run benchmarks to compare results and identify problems in the setup. We did...
So I updated the Zabbix templates used for the Proxmox nodes and switched to Grafana to render additional graphs. We now have graphs for single CPU threads and for NVMe utilization in percent across all three nodes, with all items in one graph.
This is a benchmark run with 4 OSDs per NVMe.
Order is
4M...
@Alwin : I am rebuilding the three nodes again and again using Ansible. On each new deployment I reissue the license, as I want to use the Enterprise Repository. After the reissue it takes some time until I can activate the license on the systems again, and it also takes some time until the...
Benchmark script drop for future reference:
Resides in /etc/pve and is started on all nodes using
bash /etc/pve/radosbench.sh
#!/bin/bash
LOGDIR=/root
STAMP=$(date +%F-%H_%M)   # capture the timestamp once so .log and .err get the same name
exec >"$LOGDIR/$(basename "$0" .sh)-$STAMP.log"
exec 2>"$LOGDIR/$(basename "$0" .sh)-$STAMP.err"
BLOCKSIZES="4M 64K 8K 4K"
for BS...
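The loop is cut off above; presumably it runs rados bench per block size, along these lines (pool name, runtime and thread count are guesses, not the values from the actual script):

for BS in $BLOCKSIZES; do
    rados bench -p bench 600 write -b $BS -t 16 --no-cleanup
    rados bench -p bench 600 seq -t 16
    rados bench -p bench 600 rand -t 16
    rados -p bench cleanup
done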
Maybe(!) it would be easier to install a standard Debian Buster first, then adjust the apt sources to use Proxmox, and continue from there.
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
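Per the linked wiki, the switch boils down to something like this (the key file matches the PVE 6.x line for Buster; check the wiki for the current names):

echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi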
So here is the IOPS test with 4K blocks.
So I believe there is nothing left to change in the configuration that would further improve performance.
Next up are tests from within some VMs.
@Gerhard W. Recher , I tried your sysctl tuning settings yesterday and today, but they performed worse than the ones I initially took from https://fasterdata.es.net/host-tuning/linux/ .
My current and final sysctl network tuning:
# https://fasterdata.es.net/host-tuning/linux/100g-tuning/...
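The values themselves are cut off above; purely as an illustration of the kind of knobs that page touches (not necessarily the exact values I ended up with):

# illustrative fasterdata-style settings, e.g. in /etc/sysctl.d/90-net-tuning.conf
net.core.rmem_max = 2147483647
net.core.wmem_max = 2147483647
net.ipv4.tcp_rmem = 4096 87380 2147483647
net.ipv4.tcp_wmem = 4096 65536 2147483647
net.ipv4.tcp_mtu_probing = 1
net.core.default_qdisc = fq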
@Gerhard W. Recher , I documented that in German....
- Install the Mellanox mft tools and let them compile the driver
- Read out the old version and document it (mlxburn -query -d /dev/mst/mt<TAB><TAB>)
- Flash the new version (https://www.mellanox.com/support/firmware/connectx5en - OPN...
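In shell form the query/update flow looks roughly like this (the device path and firmware file name are placeholders; flint is the burning tool shipped with MFT):

mst start
mst status                                   # lists the device, e.g. /dev/mst/mt4121_pciconf0
mlxburn -query -d /dev/mst/mt4121_pciconf0   # document the old firmware version
flint -d /dev/mst/mt4121_pciconf0 -i fw-ConnectX5-new.bin burn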
So here we have one unencrypted OSD per NVMe....
The result is not as good as with 4 OSDs per NVMe. It seems the OSD is CPU-bound.
It might be a good idea to repeat the test and use taskset to pin the relevant processes to a set of CPU cores in order to identify that one CPU-hungry process (quick sketch below).
In other...
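A quick way to test that would be pinning the OSD processes, e.g. (the core range here is arbitrary):

for pid in $(pidof ceph-osd); do
    taskset -cp 0-15 "$pid"
done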
So I purged the complete Ceph installation and recreated it with 1 encrypted OSD per NVMe.
The first rados bench run did not show high write performance - since I had rebooted, the "cpupower frequency-set -g performance" setting was missing again. I ran that at 16:40.
The write performance is not...
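For completeness, the governor can be set and verified like this (same command as above, plus a quick check):

cpupower frequency-set -g performance   # switch all cores to the performance governor
cpupower frequency-info -p              # verify the active policy/governor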