Search results

  1. 150mb/sec on a NVMe x3 ceph pool

    So you talk to your Ceph OSDs via a 1 GBit/s network, correct? 1000 MBit/s / 8 = 125 MByte/s
  2. 150mb/sec on a NVMe x3 ceph pool

    So a three-node Proxmox cluster. How are the nodes connected to each other, and what bandwidth do the Ceph cluster and the Ceph public network have?
  3. Windows 7/2008R2 driver support

    Just started an old Win 2008 R2 Enterprise system to have a look... I am not using SCSI since I want to use FSTRIM - which only works with SATA on these old Windows versions...
  4. pveceph - unable to get device info for '/dev/nvme2n1p1'?

    Another idea:
    - Use ceph-volume first to create the OSDs
    - Then out, stop, and purge them
    - Then recreate them with pveceph using the already existing LVs...
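
    A hedged sketch of that sequence; the OSD id (12) and the device path are placeholders, not values from the thread:

        # create 4 OSDs on one NVMe; ceph-volume builds the LVs for you
        ceph-volume lvm batch --osds-per-device 4 /dev/nvme2n1
        # take one OSD out, stop it, and purge it (repeat per OSD id)
        ceph osd out 12
        systemctl stop ceph-osd@12
        ceph osd purge 12 --yes-i-really-mean-it
        # then recreate the OSDs with pveceph on the LVs that ceph-volume left behind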
  5. pveceph - unable to get device info for '/dev/nvme2n1p1'?

    What I tried to say was:
    - Use pvcreate on your NVMe to create an LVM device
    - Use lvcreate on your LVM to create 4 logical volumes
    - Use these LVs in pveceph, plus your WAL device
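
    A minimal sketch of those steps; the device path, VG name, and LV sizing are placeholder assumptions:

        # one physical volume and one volume group on the NVMe
        pvcreate /dev/nvme2n1
        vgcreate nvme_osd /dev/nvme2n1
        # four equally sized logical volumes
        for i in 1 2 3 4; do
            lvcreate -l 25%VG -n osd$i nvme_osd
        done
        # then hand the LVs to pveceph, together with the WAL device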
  6. pveceph - unable to get device info for '/dev/nvme2n1p1'?

    You could achieve this more easily. See https://forum.proxmox.com/threads/recommended-way-of-creating-multiple-osds-per-nvme-disk.52252/post-242117 - when you look at the result, it creates an LVM device with 4 volumes. You should be able to use the volumes with pveceph.
  7. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    The ThomasKrenn RA1112 1U pizza box uses an Asus KRPA-U16 motherboard which runs on an AMI BIOS. The only settings I changed are:
    - Pressed F5 for Optimized Defaults
    - Disabled CSM support (we only use UEFI)
    We wanted to benchmark to compare results and identify problems in the setup. We did...
  8. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So I updated the Zabbix templates used for the Proxmox nodes and switched to Grafana to render additional graphs. We now have graphs of single CPU threads and of NVMe utilization percentage across all three nodes, with all items in one graph. This is a benchmark run with 4 OSDs per NVMe. Order is 4M...
  9. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Alwin : I am rebuilding the three nodes again and again using Ansible. On each new deploy I reissue the license, as I want to use the Enterprise Repository. After the reissue it takes some time until the license can be activated on the systems again, and it also takes some time until the...
  10. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Benchmark script drop for future reference: resides in /etc/pve and is started on all nodes using "bash /etc/pve/radosbench.sh":
    #!/bin/bash
    LOGDIR=/root
    exec >$LOGDIR/$(basename $0 .sh)-$(date +%F-%H_%M).log
    exec 2>$LOGDIR/$(basename $0 .sh)-$(date +%F-%H_%M).err
    BLOCKSIZES="4M 64K 8K 4K"
    for BS...
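
    The loop presumably runs rados bench once per block size; a hedged sketch of such a call (pool name, runtime, and thread count are placeholders, not taken from the script):

        # write test with a 4M block size, keeping the objects for the read test
        rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
        # sequential read test against the objects written above
        rados bench -p testpool 60 seq -t 16
        # remove the benchmark objects afterwards
        rados -p testpool cleanup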
  11. Proxmox on Mac mini 2011

    Maybe(!) it would be easier to install a standard Debian Buster first, then adjust the apt sources to use Proxmox, and continue from there. https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
  12. Proxmox on Mac mini 2011

    Life is too short to waste time on slow computers...
  13. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So here is the IOPS test with 4K blocks. I believe there is nothing left to change in the configuration that would further improve the performance. Next are tests from within some VMs.
  14. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Gerhard W. Recher , I tried your sysctl tuning settings yesterday and today, but they performed worse than the ones I got initially from https://fasterdata.es.net/host-tuning/linux/ . My current and final sysctl network tuning: # https://fasterdata.es.net/host-tuning/linux/100g-tuning/...
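
    For reference, a sketch of the kind of settings on that fasterdata page; the exact values here are assumptions, check the page for its current recommendations:

        # /etc/sysctl.d/90-net-tuning.conf
        # allow TCP buffers to grow large on high-bandwidth links
        net.core.rmem_max = 2147483647
        net.core.wmem_max = 2147483647
        net.ipv4.tcp_rmem = 4096 87380 1073741824
        net.ipv4.tcp_wmem = 4096 65536 1073741824
        # fair queueing and MTU probing are commonly recommended there
        net.core.default_qdisc = fq
        net.ipv4.tcp_mtu_probing = 1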
  15. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So, here is a test with 2 unencrypted OSDs per NVMe. This is the best result so far (compared to 4 OSDs per NVMe and 1 OSD per NVMe).
  16. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Looks like you are booting using grub...
    -> Adjust GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
    -> Run "update-grub"
    -> Reboot
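
    A minimal sketch, using the kernel parameters mentioned later in this thread:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_aspm=off"
        # then apply and reboot
        update-grub
        reboot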
  17. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    74.5 Gbits/sec... - let's see your /proc/cmdline! Check if "amd_iommu=on iommu=pt pcie_aspm=off" is already active!
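
    A quick way to check (a sketch):

        cat /proc/cmdline
        for p in amd_iommu=on iommu=pt pcie_aspm=off; do
            grep -qw "$p" /proc/cmdline && echo "$p: active" || echo "$p: MISSING"
        done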
  18. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Gerhard W. Recher , I documented that in German....
    - Install the Mellanox mft tools and let them compile the driver
    - Pull the old firmware version and document it (mlxburn -query -d /dev/mst/mt<TAB><TAB>)
    - Flash the new version (https://www.mellanox.com/support/firmware/connectx5en - OPN...
  19. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So here we have one unencrypted OSD per NVMe.... The result is not as good as with 4 OSDs per NVMe. It seems the OSD is CPU-bound. It might be a good idea to repeat the test and use taskset to pin the relevant processes to a set of CPU cores, to identify that one CPU-hungry process (see the sketch below). In other...
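
    A hedged sketch of such a pinning; the OSD id (0) and the core list (0-7) are placeholders:

        # pin the ceph-osd process for OSD 0 to cores 0-7, so its load
        # stands out in the per-core graphs
        OSD_PID=$(pgrep -f 'ceph-osd .*--id 0 ' | head -n1)
        taskset -cp 0-7 "$OSD_PID"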
  20. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So I purged the complete Ceph installation and recreated it with 1 OSD per NVMe, with encrypted OSDs. The first rados bench run did not show a high write performance - since I had rebooted, the "cpupower frequency-set -g performance" was missing. I ran that at 16:40. The write performance is not...