Search results

  1. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Is raidz just really slow? Should I give up and use mirrors? Various places state that a raidz vdev goes at the speed of its slowest device, but the underlying IOPS aren't maxing out at what the SSDs can do...
  2. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Thanks @Dunuin, appreciate your attention. I'm not using a zvol (I think?). Also, why do I not see a similar slowdown when benchmarking on HDDs, in terms of underlying IOPS?
  3. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    I don't believe that's true; the ZFS recordsize parameter is just the maximum size. Check zpool iostat -r [pool] to see the distribution of the various block sizes (example commands after this list). If you mean volblocksize (is that the minimum?), mine doesn't show a value, is that weird? But I guess it's 4k: # zfs get volblocksize...
  4. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Thanks @Dunuin - how do you measure write amplification directly? Also, why would ZFS write amplification affect read IOPS?
  5. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Hi @aaron, thanks for your reply. primarycache=none is set on the zpool in question (see the commands after this list). I will be using containers, so I guess the filesystem dataset is the relevant one? The system is idle, and the benchmark duration appears not to have an impact. My concern is the per-device IOPS; it appears to make sense for...
  6. Proxmox VE ZFS Benchmark with NVMe

    Hi all, I wonder if I could hijack with a related SSD performance benchmarking question - are my results within expectations? I have two identical PVE 7.0-11 systems, the only difference being the HDD / SSD arrangement. The SSDs are enterprise SATA3 Intel S4520, the HDDs are 7.2K SAS. Full post here...
  7. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    OK, so a potentially 'fairer' test of the SSDs: 64 x random read/write: fio --ioengine=libaio --filename=/rpool/fio/testx --size=4G --time_based --name=fio --group_reporting --runtime=10 --direct=1 --sync=1 --iodepth=1 --rw=randrw --bs=4K --numjobs=64 Also, I've moved to zpool iostat for IO...
  8. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    OK, that makes sense, thank you Dunuin. However, I'm sure something is wrong. Compare the following (sketched after this list): running 4 tests, 2 on each machine, with the only difference being whether the target is the raw device or a file on the non-cached ZFS pool, /dev/sdg or /rpool/fio/testx. fio --ioengine=sync...
  9. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Hi Dunuin, thanks for your reply! The fio command is in the OP. I'd expect the IOPS of one SSD to be on the order of tens of thousands for sequential reads/writes, as it is on my laptop with an SSD, but there it's nowhere near that...
  10. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Hi there! I have two PVE 7.0 systems on ZFS, one with 12 x 4TB 7.2K SAS HDDs in ZFS RAID 10, the other with 4 x 4TB SATA SSDs in RAID-Z1, and they're coming out with near-identical IO performance, which is suspicious! From benchmarking with fio with caches and buffers disabled, on sequential reads / writes, the...
  11. DiskIO in CT missing

    Adding my voice - Disk IO shows nothing on my containers!
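
A minimal sketch of the property checks mentioned in results 3 and 5 above, assuming the pool and dataset names (rpool, rpool/fio) that appear in those posts; note that volblocksize only exists on zvols, which is why a filesystem dataset shows no value for it:

  # recordsize applies to filesystem datasets, volblocksize only to zvols
  zfs get recordsize,volblocksize,primarycache rpool/fio
  # bypass the ARC on the benchmark dataset so fio hits the disks
  zfs set primarycache=none rpool/fio
  # histogram of the request sizes actually issued to the vdevs
  zpool iostat -r rpool
  # per-device IOPS and bandwidth, refreshed every second
  zpool iostat -v rpool 1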

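Result 8 above compares a raw device against a file on the pool, but its fio command is cut off in the excerpt. The following is only a sketch of such a comparison, with flags modeled loosely on the fio line quoted in result 7 (single job, sync engine as in result 8's excerpt); the device and file paths (/dev/sdg, /rpool/fio/testx) are taken from the posts and should be adjusted:

  # raw-device baseline -- WARNING: writing to /dev/sdg destroys its contents
  fio --ioengine=sync --filename=/dev/sdg --name=raw --size=4G --runtime=10 \
      --time_based --direct=1 --sync=1 --iodepth=1 --rw=randrw --bs=4K \
      --numjobs=1 --group_reporting
  # the same workload against a file on the non-cached ZFS dataset
  fio --ioengine=sync --filename=/rpool/fio/testx --name=zfsfile --size=4G \
      --runtime=10 --time_based --direct=1 --sync=1 --iodepth=1 --rw=randrw \
      --bs=4K --numjobs=1 --group_reporting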