Is this the right way to benchmark an encrypted ZFS pool with fio?

9bitbyte

New Member
Sep 25, 2021
Hi All,

As I am new to Proxmox, ZFS, and fio, I wanted to confirm, before sharing any results, that I am doing this right (i.e. that I am testing the right things and in the right way)!

I created a ZFS mirror pool over 2 HDDs for VMs only (the Proxmox host is on another ZFS pool of SSDs). I enabled lz4 compression and then native ZFS encryption on the pool. At every step I ran the same benchmark file, to get comparable results.
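For reference, the setup roughly corresponds to commands like the sketch below (not taken verbatim from my history; the device names and encryption key options are assumptions, and the dataset name matches the /vmPool/encryptedData path used further down):

Code:
# sketch of the assumed setup -- mirror pool over two HDDs
zpool create vmPool mirror /dev/sdX /dev/sdY
# enable lz4 compression on the pool's root dataset
zfs set compression=lz4 vmPool
# natively encrypted child dataset (cipher and key format assumed)
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase vmPool/encryptedData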

I used a fio job file with 24 jobs, testing multiple combinations of options: different block sizes, random vs. sequential access, and synchronous vs. asynchronous I/O. My system has 16 GB of RAM. I tried to come up with a test that would be demanding enough to reflect real-life performance, while eliminating caching and RAM interference as much as possible. (The job file can be summarized by the command below.)

Code:
fio --name=WriteAndRead --size=16g --bs={4k,16k,1m} --rw={read,write,randread,randwrite} --ioengine=libaio --sync={0,1} --iodepth=32 --numjobs=1 --direct=1 --end_fsync=1 --gtod_reduce=1 --time_based --runtime=60
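Expanding the braces, the 3 block sizes × 4 access patterns × 2 sync settings give the 24 jobs. One concrete job would look like the line below (the --directory flag is added here for illustration, pointing fio at the dataset under test):

Code:
fio --name=RandWrite4k --directory=/vmPool/encryptedData --size=16g --bs=4k --rw=randwrite --ioengine=libaio --sync=0 --iodepth=32 --numjobs=1 --direct=1 --end_fsync=1 --gtod_reduce=1 --time_based --runtime=60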

I tested the performance of the unencrypted ZFS pool by running fio from the mounted pool directory --> /vmPool/fio ....
I tested the performance of the encrypted ZFS pool by running fio from the mounted encrypted filesystem I created --> /vmPool/encryptedData/fio ....

  1. Was this the right way to benchmark the ZFS pool and then the encrypted filesystem?
  2. Is the fio test I came up with good enough to compare performance?
I very much appreciate your insights and experience.
 
My fio tests showed that using ZFS native encryption will double the write amplification. So don't be surprised if you only get half the performance when writing to an encrypted dataset/zvol.
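One way to check this yourself (a sketch, not part of the original tests) is to compare the write volume fio reports with what the pool actually writes to the disks while the job runs:

Code:
# watch per-vdev write bandwidth in 1-second intervals during the fio run
zpool iostat -v vmPool 1
# write amplification is roughly: bytes written at disk level / bytes fio reports as written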
And if you want to see the real read performance of the disks, you need to temporarily set the pool's "primarycache" to "metadata" (zfs set primarycache=metadata YourPool/YourTestedDataset); otherwise you are just benchmarking your RAM, because reads will be served from the ARC.
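For example, with the dataset from the question above (primarycache=all is the default, so restore it when you are done):

Code:
# make reads hit the disks instead of the ARC
zfs set primarycache=metadata vmPool/encryptedData
# ... run the fio read benchmarks ...
# restore the default
zfs set primarycache=all vmPool/encryptedData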
And you might want to use a runtime longer than 60 seconds, so the drives' caches can fill up completely and the drives are pushed to their limits.
 
