Search results

  1. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Alwin : I am rebuilding the three nodes again and again using Ansible. On each new deploy I reissue the license, as I want to use the Enterprise Repository. After the reissue it takes some time before the license can be activated on the systems again, and it also takes some time until the...
  2. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Benchmark script drop for future reference: Resides in /etc/pve and is started on all nodes using bash /etc/pve/radosbench.sh #!/bin/bash LOGDIR=/root exec >$LOGDIR/$(basename $0 .sh)-$(date +%F-%H_%M).log exec 2>$LOGDIR/$(basename $0 .sh)-$(date +%F-%H_%M).err BLOCKSIZES="4M 64K 8K 4K" for BS...
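
    A minimal sketch of how such a wrapper might continue, assuming the truncated loop simply runs rados bench once per block size against the ceph-proxmox-VMs pool mentioned later in the thread (durations, thread count and the cleanup step are assumptions):

      #!/bin/bash
      # log stdout/stderr per run, as in the snippet above
      LOGDIR=/root
      exec  >$LOGDIR/$(basename $0 .sh)-$(date +%F-%H_%M).log
      exec 2>$LOGDIR/$(basename $0 .sh)-$(date +%F-%H_%M).err
      BLOCKSIZES="4M 64K 8K 4K"
      for BS in $BLOCKSIZES; do
          # the write phase keeps its objects so the read phases have data
          rados bench 600 --pool ceph-proxmox-VMs write -b $BS -t 16 --no-cleanup
          rados bench 600 --pool ceph-proxmox-VMs seq  -t 16
          rados bench 600 --pool ceph-proxmox-VMs rand -t 16
          rados -p ceph-proxmox-VMs cleanup
      done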
  3. Proxmox on Mac mini 2011

    Maybe(!) it would be easier to install a standard Debian Buster first, then adjust the APT sources to use Proxmox, and continue from there. https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
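
    For reference, the repository switch described on that wiki page boils down to something like the following (shown with the no-subscription repository; the enterprise repository additionally needs a valid subscription key):

      # add the Proxmox VE repository and its release key on a plain Debian Buster
      echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
          > /etc/apt/sources.list.d/pve-install-repo.list
      wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
          -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
      apt update && apt full-upgrade
      apt install proxmox-ve postfix open-iscsi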
  4. Proxmox on Mac mini 2011

    Life is too short to waste time on slow computers...
  5. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So here is the IOPS test with 4K blocks. I believe there is nothing left to change in the configuration that would further improve the performance. Next come tests from within some VMs.
  6. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Gerhard W. Recher , I tried your sysctl tuning settings yesterday and today, but they performed worse than the ones I got initially from https://fasterdata.es.net/host-tuning/linux/ . My current and final sysctl network tuning: # https://fasterdata.es.net/host-tuning/linux/100g-tuning/...
  7. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So, here is a test with 2 unencrypted OSDs per NVMe. This is the best result so far (compared to 4 OSDs per NVMe and 1 OSD per NVMe).
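
    For anyone reproducing the 2-OSDs-per-NVMe layout: ceph-volume can split a device into several OSDs; a hedged example (device path is a placeholder, run once per NVMe):

      # create two OSDs on a single NVMe namespace
      ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1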
  8. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Looks like you are booting using grub... -> Adjust GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub -> Run "update-grub" -> Reboot
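
    A sketch of that change, assuming the flags from the next post are the ones to add ("quiet" is the stock default):

      # /etc/default/grub (excerpt)
      GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_aspm=off"
      # apply and reboot
      update-grub
      reboot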
  9. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    74.5 Gbits/sec... - let's see your /proc/cmdline! Check if "amd_iommu=on iommu=pt pcie_aspm=off" is already active!
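
    A quick check on each node, for example:

      cat /proc/cmdline
      # should print all three flags if they are active
      grep -o 'amd_iommu=on\|iommu=pt\|pcie_aspm=off' /proc/cmdline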
  10. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Gerhard W. Recher , I documented that in German.... - Install the Mellanox MFT tools and let them compile the driver - Read out and document the old firmware version (mlxburn -query -d /dev/mst/mt<TAB><TAB>) - Flash the new version (https://www.mellanox.com/support/firmware/connectx5en - OPN...
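
    A rough command-level outline of those steps (the /dev/mst device name and firmware image file are placeholders; the correct OPN/PSID-specific image has to be taken from the Mellanox page above):

      # start the Mellanox Firmware Tools and list the devices
      mst start
      mst status
      # query and note the currently installed firmware version
      mlxburn -query -d /dev/mst/mt4119_pciconf0
      # burn the downloaded image, then power-cycle or reset the NIC
      flint -d /dev/mst/mt4119_pciconf0 -i fw-ConnectX5-<version>.bin burn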
  11. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So here we have one unencrypted OSD per NVMe.... The result is not as good as with 4 OSDs per NVMe. It seems the OSD is CPU-bound. It might be a good idea to repeat the test and use taskset to pin the relevant processes to a limited number of CPU cores to identify that one CPU-hungry process. In other...
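
    Something along these lines could be used for that pinning experiment (OSD id and core list are arbitrary examples):

      # pin the osd.0 daemon to cores 0-15 and watch its threads during a bench run
      OSD_PID=$(systemctl show -p MainPID --value ceph-osd@0.service)
      taskset -cp 0-15 "$OSD_PID"
      top -H -p "$OSD_PID"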
  12. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So I purged the complete Ceph installation and recreated it with 1 encrypted OSD per NVMe. The first rados bench run did not show high write performance - since I had rebooted, the "cpupower frequency-set -g performance" setting was missing. I ran that at 16:40. The write performance is not...
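
    To keep the governor from being lost on reboot it simply has to be reapplied on every node, e.g. from a boot-time hook; the manual version is:

      # set the performance governor on all cores and verify it
      cpupower frequency-set -g performance
      cpupower frequency-info | grep 'The governor'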
  13. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    @Alwin , when you use encryption on the Microns, you are talking about the BIOS user password activated encryption, correct? And the aes-xts engine you refer to is the "Encrypt OSD"-Checkbox/"ceph osd create --encrypted"-CLI switch, correct?
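
    For context, the OSD-level encryption referred to there is dm-crypt applied at OSD creation time; the CLI form is along these lines (device path is an example, and pveceph exposes the same option behind the "Encrypt OSD" checkbox):

      # create a dm-crypt encrypted OSD on one NVMe namespace
      ceph-volume lvm create --data /dev/nvme0n1 --dmcrypt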
  14. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So here is something I do not understand: why is there less network traffic during sequential reads than the Ceph read bandwidth? I had to take care of the RAM replacement scheduling during the tests. Unfortunately all eight modules have to be replaced, since support is not able to...
  15. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    Hi, currently I am running a three-node Ceph benchmark here and trying to follow your PDF. Feedback and recommendations in my thread are welcome...
  16. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Alwin : How did you run three rados bench seq read clients? The first node starts fine, but the next node gives: root@proxmox05:~# rados bench 28800 --pool ceph-proxmox-VMs seq -t 16 hints = 1 sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s) 0 0 0...
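
    For reference, the usual way to run several rados bench read clients in parallel is a per-node --run-name: each node writes its own object set with --no-cleanup and then reads back exactly that set (pool name from the thread, duration is an example):

      # on every node: write a node-specific object set and keep it
      rados bench 600 --pool ceph-proxmox-VMs write -t 16 --no-cleanup --run-name "$(hostname)"
      # on every node: sequential read of that node's own object set
      rados bench 600 --pool ceph-proxmox-VMs seq -t 16 --run-name "$(hostname)"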
  17. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    I would say it is enough write testing for now... The results look stable. The next test tomorrow will be with one OSD per NVMe...
  18. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So how would you add the three of them up then? Total time run: 600.009 Total writes made: 369420 Write size: 4194304 Object size: 4194304 Bandwidth (MB/sec): 2462.76 Stddev Bandwidth: 172.924 Max bandwidth (MB/sec): 4932 Min bandwidth (MB/sec)...
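
    Assuming the per-node log files written by the radosbench.sh wrapper above, the three client results can simply be summed, e.g. (log file names are an assumption, one bench run per log):

      # add up the average "Bandwidth (MB/sec)" of the three per-node logs
      grep -h 'Bandwidth (MB/sec):' /root/radosbench-*.log \
          | awk '{sum += $3} END {printf "aggregate: %.1f MB/s\n", sum}'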
  19. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    I compared it to the one from the night (post 4). I ran the first rados bench (rados bench 60 --pool ceph-proxmox-VMs write -b 4M -t 16 --no-cleanup) Total time run: 60.0131 Total writes made: 83597 Write size: 4194304 Object size: 4194304 Bandwidth (MB/sec)...
  20. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Applied: root@proxmox04:~# cat /etc/sysctl.d/90-network-tuning.conf # https://fasterdata.es.net/host-tuning/linux/100g-tuning/ # allow testing with buffers up to 512MB net.core.rmem_max = 536870912 net.core.wmem_max = 536870912 # increase Linux autotuning TCP buffer limit to 256MB...
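
    The truncated file appears to follow the linked fasterdata.es.net 100G tuning; a sketch of what the full file likely contains (the tcp_rmem/tcp_wmem lines are taken from that guide and are assumptions here):

      # /etc/sysctl.d/90-network-tuning.conf (reconstructed sketch)
      # https://fasterdata.es.net/host-tuning/linux/100g-tuning/
      # allow testing with buffers up to 512MB
      net.core.rmem_max = 536870912
      net.core.wmem_max = 536870912
      # increase Linux autotuning TCP buffer limit to 256MB
      net.ipv4.tcp_rmem = 4096 87380 268435456
      net.ipv4.tcp_wmem = 4096 65536 268435456
      # apply with: sysctl --system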
