Search results

  1. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    @Alwin, when you use encryption on the Microns, you are talking about the BIOS user-password-activated encryption, correct? And the aes-xts engine you refer to is the "Encrypt OSD" checkbox / "ceph osd create --encrypted" CLI switch, correct?
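
    For context: the "Encrypt OSD" option typically puts the OSD on top of dm-crypt, which is where the aes-xts cipher comes in, independent of any drive-internal password set in the BIOS. A minimal sketch of the CLI form, assuming the pveceph tool and an example device path:

      # create an OSD with dm-crypt encryption on an example NVMe device
      pveceph osd create /dev/nvme0n1 --encrypted 1
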
  2. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So here is something I do not understand: why is there less network traffic when doing sequential reads compared to the Ceph read bandwidth? Had to take care of the RAM replacement scheduling during the tests. Unfortunately all eight modules have to be replaced since support is not able to...
  3. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    Hi, I am currently running a three-node Ceph benchmark here and trying to follow your PDF. Feedback and recommendations in my thread are welcome...
  4. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    @Alwin: How did you run three rados bench seq read clients? The first node starts fine, but the next node gives: root@proxmox05:~# rados bench 28800 --pool ceph-proxmox-VMs seq -t 16 hints = 1 sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s) 0 0 0...
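
    One way to run seq read clients from several nodes at once is to give each node its own benchmark run name when the test objects are written; a sketch using rados bench's --run-name option (durations and names are examples):

      # on each node: write its own set of benchmark objects first
      rados bench 600 --pool ceph-proxmox-VMs write -b 4M -t 16 --no-cleanup --run-name bench_$(hostname)

      # afterwards each node reads back its own objects, in parallel with the others
      rados bench 600 --pool ceph-proxmox-VMs seq -t 16 --run-name bench_$(hostname)

      # remove the benchmark objects when finished
      rados --pool ceph-proxmox-VMs cleanup --run-name bench_$(hostname)
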
  5. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    I would say that is enough write testing for now... Results look stable. The next test tomorrow will be with one OSD per NVMe...
  6. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    So how would you add the three of them up then? Total time run: 600.009 Total writes made: 369420 Write size: 4194304 Object size: 4194304 Bandwidth (MB/sec): 2462.76 Stddev Bandwidth: 172.924 Max bandwidth (MB/sec): 4932 Min bandwidth (MB/sec)...
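
    Since every node reports only its own throughput, the cluster total is the sum of the three "Bandwidth (MB/sec)" lines. A small sketch, assuming the three outputs were saved to hypothetical log files:

      # sum the per-node bandwidth figures (file names are examples)
      awk -F: '/^Bandwidth \(MB\/sec\)/ { sum += $2 } END { print sum " MB/s combined" }' \
          bench-proxmox04.log bench-proxmox05.log bench-proxmox06.log
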
  7. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    I compared it to the one from the night (Post 4). Ran the first rados bench (rados bench 60 --pool ceph-proxmox-VMs write -b 4M -t 16 --no-cleanup): Total time run: 60.0131 Total writes made: 83597 Write size: 4194304 Object size: 4194304 Bandwidth (MB/sec)...
  8. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Applied
    root@proxmox04:~# cat /etc/sysctl.d/90-network-tuning.conf
    # https://fasterdata.es.net/host-tuning/linux/100g-tuning/
    # allow testing with buffers up to 512MB
    net.core.rmem_max = 536870912
    net.core.wmem_max = 536870912
    # increase Linux autotuning TCP buffer limit to 256MB...
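
    To load such a drop-in file without a reboot and confirm that the limits took effect, something along these lines should work (a sketch, using the file name shown above):

      # re-read all sysctl drop-in files, including /etc/sysctl.d/90-network-tuning.conf
      sysctl --system

      # verify the new socket buffer limits
      sysctl net.core.rmem_max net.core.wmem_max
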
  9. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Phew, just checked the other two. Luckily they show "NUMA node(s): 1" as well...
  10. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Hi Alwin, the attachment to the initial post contains the output of lscpu showing "NUMA node(s): 1". qperf was indeed new to me - I will have a look at it.
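
    Checking the NUMA layout and measuring point-to-point bandwidth and latency with qperf could look roughly like this (a sketch; the IP is the one from the iperf example further down):

      # NUMA topology of the local node
      lscpu | grep -i numa

      # on the target node: start the qperf listener
      qperf

      # from the other node: measure TCP bandwidth and latency
      qperf 10.33.0.15 tcp_bw tcp_lat
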
  11. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Took some time, but I found the difference between the hosts: on the first two, some /etc/sysctl.d/ parameter files from previous tests had been left behind...
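
    A quick way to spot such leftovers is to compare the drop-in directories and the effective settings between nodes; a sketch assuming root SSH access from one node to another (host name is an example):

      # compare the drop-in files between two nodes
      diff <(ls /etc/sysctl.d/) <(ssh proxmox05 ls /etc/sysctl.d/)

      # compare the effective kernel settings
      diff <(sysctl -a | sort) <(ssh proxmox05 'sysctl -a | sort')
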
  12. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Result from the overnight run: proxmox05 seems flaky, and proxmox06 seems to use less CPU than the rest. We need to check for configuration mismatches; it seems the nodes still have differences in their configuration.
  13. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Adjusted the network settings according to the AMD Network Tuning Guide. We did not use NUMA adjustments as we only have single-socket CPUs. So this is a Zabbix screen over the three nodes. Details over the time ranges: - From 07:00 - 10:25: performance test running over the weekend. proxmox04...
  14. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    First test is to see whether the network configuration is in order. The 100GBit link is four network streams combined, so at least four processes are required to test for the maximum.
    root@proxmox04:~# iperf -c 10.33.0.15 -P 4
    ------------------------------------------------------------
    Client connecting to...
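
    To reproduce the test, one node runs the iperf server and the other connects with four parallel streams; a minimal sketch using the address from the snippet:

      # on the target node (10.33.0.15 in the snippet): start the iperf server
      iperf -s

      # on proxmox04: open four parallel TCP streams to saturate the 100GBit link
      iperf -c 10.33.0.15 -P 4
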
  15. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Hi everybody, we are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration. We purchased 8 nodes with the following configuration:
    - ThomasKrenn 1HE AMD Single-CPU RA1112
    - AMD EPYC 7742 (2.25 GHz, 64-Core, 256 MB)
    - 512 GB RAM
    - 2x 240GB SATA...
  16. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    Thanks for this second benchmark! It gives a clear impression of what should be achievable with current hardware. I am currently trying to run that exact benchmark setup on a three-node cluster and have problems running three rados bench clients simultaneously. Can you adjust the PDF and give...
  17. [SOLVED] Ceph Module Zabbix "failed to send data..."

    I'll jump in here because I ran into the same error after upgrading from 5.4 to 6.1. The configuration had already worked before the upgrade; after the upgrade I got "Failed to send data to Zabbix" and the Ceph status then showed Warning. The problem in our case was the still missing...
  18. Failing to migrate from vSphere to Proxmox

    Maybe read this: https://forum.proxmox.com/threads/converting-a-windows-vm-from-vmware-nfs-on-netapp-to-proxmox-ceph-with-minimal-downtime.51194/#post-237757
  19. fstrim for Windows Guests and Ceph Storage

    Hi, after reading your posts I reconfigured my Windows 7 and Server 2008r2 systems to use SATA on Ceph RBD storage. ...
    bootdisk: sata0
    ostype: win7
    sata0: ceph-proxmox-VMs:vm-106-disk-0,cache=writeback,discard=on,size=30G
    scsihw: virtio-scsi-pci
    ... Using...
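
    For reference, the same cache/discard options can also be set from the CLI with qm set; a sketch assuming VM ID 106 and the storage and disk names from the snippet:

      # enable writeback cache and discard on the existing SATA disk of VM 106
      qm set 106 --sata0 ceph-proxmox-VMs:vm-106-disk-0,cache=writeback,discard=on
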
  20. [SOLVED] Converting a Windows VM from VMware (NFS on NetApp) to ProxMox (Ceph) with minimal downtime

    So the next conversion with another VM just went smoothly with only two reboots. Steps:
    Create a new VM on Proxmox
    Create an IDE raw hard disk on the NetApp NFS storage, same size as on the machine to be converted
    Create one small SCSI qcow2 disk to be able to properly install the VirtIO SCSI drivers...
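
    The thread continues with further steps; purely as an illustration (not necessarily the exact procedure from the post), a disk can later be moved from the NFS storage onto the Ceph pool with qm move_disk. The VM ID, disk slot and storage ID below are assumptions:

      # illustrative only: move the IDE disk of VM 106 to the Ceph storage as raw
      qm move_disk 106 ide0 ceph-proxmox-VMs --format raw
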