Search results

  1. VM I/O Performance with Ceph Storage

    Sure, here it is: # fio --ioengine=libaio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based --name=fio fio: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 fio-3.25 Starting...
  2. VM I/O Performance with Ceph Storage

    Of course: # fio --ioengine=libaio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio fio: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 fio-3.25 Starting 1...
  3. VM I/O Performance with Ceph Storage

    We are going to do that in the next few days; waiting for arrival... There is no newer one, we already checked that a while ago during our investigations. That would be sooo nice :S Thx
  4. VM I/O Performance with Ceph Storage

    Yes, we already posted this earlier: I would be completely with you. But as we assumed something like that too, we freed the NVMes, removed/wiped them from Ceph and - at the moment - just added the 2TB ones back to Ceph. So IMHO they are now completely empty, but the fio tests are as bad as...
  5. VM I/O Performance with Ceph Storage

    Another question: what about network latencies? We found this: https://www.cubewerk.de/2020/10/23/ceph-performance-guide-2020-for-ssd-nvme/ There one can read: "What can affect the overall performance of a ceph-cluster? Slow network (latency!), bad/slow disks, lack of CPU-cycles." [...] ping -c... (a latency-check sketch follows the results list)
  6. VM I/O Performance with Ceph Storage

    Yep, fingers crossed... It's really frustrating; we have been searching for the root cause for some weeks now and still haven't really gotten hold of it... and there are so many contradictory facts / situations... o_O
  7. VM I/O Performance with Ceph Storage

    Nope, as described in the first post, it is not just a single home server but a productive 3-node Proxmox/Ceph cluster running ~30 VMs, partly used by customers, so reinstalling from scratch would really be "suboptimal". The cluster has been running with this configuration for quite a while now and...
  8. VM I/O Performance with Ceph Storage

    @ITT We have ordered 3 enterprise NVMes, each 1TB in size, and are now hoping for the best... And yes, we set the MTU to 9000. But - sorry for coming back to this thought again - I'm really not sure whether something else is going completely wrong in the cluster and this "consumer NVMes are bad" thing of...
  9. VM I/O Performance with Ceph Storage

    7f:13.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02) 7f:13.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02)...
  10. VM I/O Performance with Ceph Storage

    More than 16384 characters, have to split the output: # lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:" 00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02) LnkSta: Speed unknown (downgraded), Width x0 (downgraded) 00:01.0 PCI bridge...
  11. VM I/O Performance with Ceph Storage

    So here is the missing info: adapter cards of this kind: https://www.amazon.de/Adapter-K%C3%BChlk%C3%B6rper-6amLifestyle-Adapterkarte-Support/dp/B07RZZ3TJG/ref=sr_1_3?keywords=pcie+nvme+adapter&qid=1673986009&sprefix=pcie%2Caps%2C196&sr=8-3 PCIe slots they are running in: ... (a sketch for checking the negotiated PCIe link follows the results list)
  12. VM I/O Performance with Ceph Storage

    Last but not least, to complete the whole picture: fio write test of a VM disk using the Ceph pool with the SAS HDDs as storage: --> IOPS=7, BW=31.9KiB/s, lat=125.31ms. IMHO these are bad values even for SAS HDDs, aren't they? Which again leads us to the assumption that there is something wrong more...
  13. VM I/O Performance with Ceph Storage

    When setting the VM disk cache from "No cache" to "Write Back", things get even worse: --> IOPS=74, BW=297KiB/s, lat=13.5ms. We thought "Write Back" would increase write performance... everything is very confusing... (a cache-mode sketch follows the results list)
  14. VM I/O Performance with Ceph Storage

    In the meantime we benchmarked a VM disk using the NVMe Ceph pool inside our Debian testing VM with fio, and are a little bit surprised: # fio --ioengine=psync --filename=/dev/sdb --size=9G --time_based --name=fio --group_reporting --runtime=600 --direct=1 --sync=1 --rw=write --bs=4K... (the command is also sketched after the results list)
  15. VM I/O Performance with Ceph Storage

    The infos aren't missing; they are given here (https://forum.proxmox.com/threads/vm-i-o-performance-with-ceph-storage.120929/post-526311) -> The disks are these: Crucial P2 CT1000P2SSD8 (1TB), Crucial P2 CT2000P2SSD8 (2TB), connected via PCIe adapter card to PCIe 4x slots
  16. VM I/O Performance with Ceph Storage

    Here are the first benchmarking results: 1.) fio on the NVMes (all have similar values, no matter if 1TB or 2TB): # fio --ioengine=libaio --filename=/dev/nvme2n1 --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio fio: (g=0): rw=write, bs=(R) 4096B-4096B... (a hedged fio sketch follows the results list)
  17. VM I/O Performance with Ceph Storage

    @stepei: you anticipated our current questions, which are going in exactly the same direction :D Actually we are going through the links / PDF from @shanreich's last post and wondering if it is really necessary for Ceph to have enterprise SSDs/NVMes which cost >1K€ per piece (the PDF is dated...
  18. VM I/O Performance with Ceph Storage

    @Neobin Thx for your hints... in the meantime we also came across the fact that it is not the best idea to use consumer NVMes for Ceph when it comes to performance; so far, so good... BUT: is it really realistic that the VM I/O performance (and notice: just when writing small files) when...
  19. VM I/O Performance with Ceph Storage

    OK, then we will give it a try overnight... BTW: just stumbled across this: https://github.com/rook/rook/issues/6964 There the option "bdev_async_discard" is mentioned alongside "bdev_enable_discard" and is also set to "true". When it is set to false in combination with "bdev_enable_discard" enabled... (a config sketch follows the results list)
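
Command sketches

The 4K sync write tests quoted in results 1, 2 and 16 (random and sequential variants with otherwise identical parameters) are the usual way to gauge whether an NVMe is suitable as a Ceph OSD, since they roughly mimic the small synchronous writes BlueStore sends to its WAL. A minimal sketch of the random-write variant, assuming /dev/nvme0n1 is an empty, not-yet-deployed disk (the test writes to the raw device and destroys its contents):

    # WARNING: destructive -- only run against an empty disk that is not an OSD
    fio --ioengine=libaio --filename=/dev/nvme0n1 --direct=1 --sync=1 \
        --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --name=fio

Consumer drives without power-loss protection typically drop to a few hundred or a few thousand IOPS in this test, while enterprise NVMes sustain tens of thousands, which is the gap the thread keeps circling around.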
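
Result 14 runs a similar test from inside a VM against a Ceph-backed virtual disk. A sketch of such a run, assuming the RBD-backed disk appears as /dev/sdb in the guest and holds no data that matters (writing to the raw device is destructive):

    # destructive: writes directly to the guest's second disk
    fio --ioengine=psync --filename=/dev/sdb --size=9G --time_based --name=fio \
        --group_reporting --runtime=600 --direct=1 --sync=1 --rw=write --bs=4K

Comparing this with the raw-device numbers above helps separate what the drives themselves can do from what the librbd/Ceph path on top of them delivers.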
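
Result 5 raises network latency as a factor. A quick sanity check is to ping the other nodes over the Ceph public and cluster networks and, since the MTU was set to 9000 (result 8), to confirm that jumbo frames actually pass end to end. The IP address below is a placeholder for your own node addresses:

    # 100 pings at 0.1 s interval; healthy 10GbE links usually stay well under 1 ms
    ping -c 100 -i 0.1 192.168.100.12
    # verify jumbo frames: 8972 bytes payload + 28 bytes of headers = MTU 9000, fragmentation forbidden
    ping -M do -s 8972 -c 3 192.168.100.12

If the second command reports "message too long" or losses, the MTU is not consistent along the whole path (switch ports included) and Ceph traffic can suffer badly.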
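
Results 9-11 concern how the NVMes are physically attached. With PCIe adapter cards it is worth confirming that each drive actually negotiated the expected link; the address 81:00.0 below is a placeholder for whatever the first command reports on your host:

    # list NVMe controllers with their bus addresses
    lspci | grep -i 'non-volatile'
    # show capable vs. negotiated link for one of them (placeholder address)
    lspci -vv -s 81:00.0 | grep -E 'LnkCap|LnkSta'

LnkSta should match LnkCap (e.g. 8GT/s, Width x4 for a PCIe 3.0 x4 drive); a downgraded speed or width on the NVMe controllers themselves would point at the adapter card or slot rather than at Ceph.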
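
Result 13 compares the "No cache" and "Write Back" disk cache modes. On Proxmox VE the mode is a per-disk option that can be changed with qm; the VM ID, bus slot and volume name below are placeholders (check qm config first), and the change only takes effect after the disk is reattached or the VM is power-cycled:

    # placeholders: VM 101, slot scsi0, volume ceph-nvme:vm-101-disk-0
    qm set 101 --scsi0 ceph-nvme:vm-101-disk-0,cache=writeback

For RBD-backed disks, writeback mainly batches writes in the host-side cache; fully synchronous 4K writes like the fio tests above still have to reach the OSDs, which is why the mode change may not help much in this particular benchmark.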
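
Result 19 mentions the BlueStore options bdev_enable_discard and bdev_async_discard from the linked rook issue. A sketch of setting them for all OSDs via the cluster config database, assuming a Ceph release that still exposes both option names (newer releases may have renamed or reworked the async setting) and keeping in mind that the OSDs must be restarted before the values take effect:

    # applies to all OSDs; restart them afterwards, e.g. systemctl restart ceph-osd@<id>
    ceph config set osd bdev_enable_discard true
    ceph config set osd bdev_async_discard true

Whether enabling discard helps or hurts is workload- and firmware-dependent, so it is best tried on a single OSD first and compared against the fio baselines gathered above.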
