Search results

  1. High I/O wait with SSDs

    So, I gave up using ZFS on my boot disk. My SSD performs significantly better with other file systems. Using ext4 with the same settings as above (8k, 8 jobs), the fio test yields 432MB/s, and with 8k and 16 jobs it is still 117MB/s (repeatedly). Using Btrfs I get around 60MB/s with 8k and 8...
  2. High I/O wait with SSDs

    @6uellerbpanda I attached the complete logs of fio, arcstat and zpool iostat. I also attached the output of dmesg and syslog. Sorry for not attaching them before; I thought it would be OK to summarize them. @tburger I ran your script, and got the following output: working on sda before: 32...
  3. High I/O wait with SSDs

    @tburger: I checked on the pool, and was surprised to see that ashift was set to 0. I tried recreating the pool using zpool create -o ashift=12 tank /dev/sde, but it had very little impact on performance: Run status group 0 (all jobs): WRITE: bw=6684KiB/s (6845kB/s), 836KiB/s-837KiB/s...
  4. High I/O wait with SSDs

    First, thank you so much for your time! So, here is what I did: first I removed one SSD from the pool and reinstalled Proxmox on this disk using ext4. Booting into this new installation, I ran fio --name test-write --ioengine=libaio --iodepth=16 --rw=randwrite --bs=128k --direct=0 --size=256m...
  5. High I/O wait with SSDs

    @6uellerbpanda: I checked the BIOS, and I'm running F7, which seems to be the latest version. I ran the tests in PVE itself with all guests stopped. I used the PVE installer to create the ZFS file system (including the mirroring settings) during install and didn't change any of the default...
  6. High I/O wait with SSDs

    @Dunuin: it's the default set by Proxmox; at least I don't remember changing any of it. root@duckburg:~# cat /proc/spl/kstat/zfs/arcstats | grep c_ c_min 4 520479104 c_max 4 8327665664 arc_no_grow 4 0...
  7. High I/O wait with SSDs

    Hello, I recently built my new "server" for my home and, as the title states, I am experiencing very poor I/O performance. The system is configured with 2 SSDs as a ZFS mirror for the system itself (rpool) and has 4 more spinning disks, of which again 2 are configured as a ZFS mirror. The hardware...
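
The diagnostic steps scattered across these results can be collected into one sketch. The pool name `tank`, the device `/dev/sde`, and the fio flags are taken from the excerpts above; `--numjobs=8` is an assumption based on the "8k, 8 jobs" runs mentioned in the thread, and recreating a pool destroys its data, so treat this as illustrative only:

```shell
# Check the pool's ashift. A value of 0 means ZFS auto-detected the
# sector size at creation, which is often wrong for consumer SSDs.
zpool get ashift tank

# Recreate the pool with 4K-aligned writes (ashift=12).
# WARNING: this destroys all data on the pool.
zpool destroy tank
zpool create -o ashift=12 tank /dev/sde

# The random-write benchmark quoted in the thread. Note that
# --direct=0 lets the page cache absorb writes; --direct=1 would
# bypass it for a harsher test. --numjobs=8 is assumed, not quoted.
fio --name test-write --ioengine=libaio --iodepth=16 --rw=randwrite \
    --bs=128k --direct=0 --size=256m --numjobs=8 --group_reporting
```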