Proxmox newbie question: LVM-Thin vs ZFS performance

Correct. It wouldn't improve anything, it would make it WORSE, which is the point; without caching an HDD is near useless.
Sorry, I did not realise you were talking about the abnormally HIGH writes; I had tunnel vision on the low reads and somehow, in the middle of the thread, did not notice it was all 4K reads.
 
These are the results with --bs=128k. I used the commands below:

Write: fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=128k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1
Read: fio --name=random-read --ioengine=posixaio --rw=randread --bs=128k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1


LVM-Thin

WRITE: bw=73.0MiB/s (76.5MB/s), 73.0MiB/s-73.0MiB/s (76.5MB/s-76.5MB/s), io=8096MiB (8489MB), run=110926-110926msec
READ: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=738MiB (773MB), run=60008-60008msec

ZFS

WRITE: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=7562MiB (7929MB), run=83761-83761msec
READ: bw=4798MiB/s (5031MB/s), 4798MiB/s-4798MiB/s (5031MB/s-5031MB/s), io=281GiB (302GB), run=60001-60001msec

We all know that disks are slow, but what I don't seem to understand is why there is such a big speed difference between LVM-Thin and ZFS (with the ARC cache reduced using options zfs zfs_arc_min=128 and options zfs zfs_arc_max=1024 in /etc/modprobe.d/zfs.conf) on the same disk. Is it normal that the ZFS write is 23% faster than the LVM-Thin write without cache? It can't be normal that the ZFS read is that much faster than LVM-Thin??
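For reference, that file just contains "options" lines and the module parameters are interpreted in bytes; a cap of e.g. 1 GiB would look like this (example values, not the exact ones I used), followed by update-initramfs -u (if root is on ZFS) and a reboot:

# ARC limits in bytes: 128 MiB minimum, 1 GiB maximum
options zfs zfs_arc_min=134217728
options zfs zfs_arc_max=1073741824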
 
You could try ZFS with "sync=always" to prevent write caching in RAM and "primarycache=none" to completely disable ARC read caching. And do a reboot so already cached data gets dropped.

And you have to write GBs of data to fill up that HDD's internal DRAM cache, which could be something like 64MB to 256MB.
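Those two are per-dataset properties, so setting them would look something like this (the dataset name is just a placeholder for wherever the fio file lives):

zfs set sync=always rpool/fio-test
zfs set primarycache=none rpool/fio-test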
 
I have put both settings in /etc/modprobe.d/zfs.conf, rebooted the device, and executed both commands again. The ZFS results are below:

WRITE: bw=67.5MiB/s (70.7MB/s), 67.5MiB/s-67.5MiB/s (70.7MB/s-70.7MB/s), io=7378MiB (7737MB), run=109381-109381msec
READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=605MiB (635MB), run=60003-60003msec

So the read numbers have come down and seem a bit more logical compared to LVM-Thin. As for the higher write speed (both ZFS and LVM-Thin), could this be the cache on the disk itself?
 
So the read numbers have come down and seem a bit more logical compared to LVM-Thin. As for the higher write speed (both ZFS and LVM-Thin), could this be the cache on the disk itself?
The disk shouldn't cache sync writes.
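If you want to rule the drive's volatile write cache out anyway, it can be queried and disabled with hdparm (device name is a placeholder):

hdparm -W /dev/sdX    # show whether write-caching is enabled
hdparm -W0 /dev/sdX   # turn it off for the test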

What might also affect performance is the block-level compression of ZFS, as long as you are not IOPS-limited. Needing to write less data makes the throughput a bit faster. But on the other hand, ZFS has way more overhead.
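You can check what is active with e.g. (pool/dataset name is just an example):

zfs get compression,compressratio rpool

and, for a like-for-like comparison, turn it off on the test dataset with zfs set compression=off.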
 
Extra info: perhaps it's a problem with the HDD itself. So I re-ran all the tests with 2 different hard disks. Results below:

With a 250 GB 3.5" HDD

LVM-Thin

WRITE: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=8192MiB (8590MB), run=82593-82593msec
READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=748MiB (784MB), run=60006-60006msec

ZFS

WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=9411MiB (9869MB), run=76611-76611msec
READ: bw=4756MiB/s (4987MB/s), 4756MiB/s-4756MiB/s (4987MB/s-4987MB/s), io=279GiB (299GB), run=60001-60001msec

With a 500 GB 2.5" HDD

LVM-Thin

WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=8192MiB (8590MB), run=72298-72298msec
READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=840MiB (881MB), run=60013-60013msec

ZFS

WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=10.2GiB (10.9GB), run=74357-74357msec
READ: bw=4810MiB/s (5043MB/s), 4810MiB/s-4810MiB/s (5043MB/s-5043MB/s), io=282GiB (303GB), run=60001-60001msec

So no problem with the drive itself. Results seem to be the same.
 
These are the results with --bs=128k. I used the commands below

LVM-Thin WRITE: bw=73.0MiB/s (76.5MB/s), 73.0MiB/s-73.0MiB/s (76.5MB/s-76.5MB/s), io=8096MiB (8489MB), run=110926-110926msec


ZFS WRITE: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=7562MiB (7929MB), run=83761-83761msec

It can't be normal that the ZFS read is that much faster than LVM-Thin??

Can you nevertheless also test with --rw=write to see the sequential numbers (and the difference between the two)? It's indeed interesting that you get faster writes on ZFS. The only thing that comes to mind is that the reordering of random writes works in favour of ZFS with its transaction groups. The other option would be that the ext4 on LVM-thin is slow. So one more hypothesis to test would be to check against a plain partition with ext4.
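A rough sketch of that last check, assuming /dev/sdX1 is a spare partition on the same disk (placeholder name):

mkfs.ext4 /dev/sdX1
mkdir -p /mnt/ext4test && mount /dev/sdX1 /mnt/ext4test
cd /mnt/ext4test
fio --name=random-write --ioengine=posixaio --rw=write --bs=128k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1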
 
The results are in. Commands used:

fio --name=random-write --ioengine=posixaio --rw=write --bs=128k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1
fio --name=random-read --ioengine=posixaio --rw=read --bs=128k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1

LVM-Thin w/ext2

WRITE: bw=55.0MiB/s (57.6MB/s), 55.0MiB/s-55.0MiB/s (57.6MB/s-57.6MB/s), io=4096MiB (4295MB), run=74517-74517msec
READ: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=5328MiB (5587MB), run=60001-60001msec

LVM-Thin w/ext3

WRITE: bw=403MiB/s (422MB/s), 403MiB/s-403MiB/s (422MB/s-422MB/s), io=24.0GiB (25.8GB), run=61002-61002msec
READ: bw=322MiB/s (337MB/s), 322MiB/s-322MiB/s (337MB/s-337MB/s), io=18.9GiB (20.2GB), run=60001-60001msec

LVM-Thin w/ext4

WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=8192MiB (8590MB), run=107730-107730msec
READ: bw=80.7MiB/s (84.7MB/s), 80.7MiB/s-80.7MiB/s (84.7MB/s-84.7MB/s), io=4845MiB (5081MB), run=60006-60006msec

ZFS
WRITE: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=7213MiB (7564MB), run=112535-112535msec

Everything was done with a different SATA data cable (but looking at the results, no change from this cable).
 
That is really interesting. Somewhere along the line, when tested with ext3, the --end_fsync argument wasn't respected. I might do some testing to verify (or maybe not; it's just a curiosity after all and time is a finite quantity ;) ).

Just as a general statement, testing with a 4g payload is mostly pointless since it's an uncommon use case for virtualization; 4k is more indicative of how your storage subsystem performs for that.
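If that means 4k random I/O, the earlier job would just change the block size, e.g.:

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1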
 
Alright, so looking at this, disregarding the ext3 result and only looking at the writes (as this was something that was strange) ... this now looks more as expected in terms of ZFS being slower.

There are still odd questions, like why ext2 is so much slower than ext4, but the most interesting one to me is that these are not my usual ballpark figures for e.g. an ext2 sequential write; then again, this one is on LVM thin.

When all these tests were performed, was everything else off? You see, the best would be to boot off e.g. a live Debian and run these when nothing else is in use.

The only remaining thing I can think of is that LVM thin is somehow slowing down the extfs, so it would be worth testing e.g. thin vs thick vs direct with one filesystem (ext4), along these lines.
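A rough sketch (sizes and names are placeholders; pve/data is the default thin pool):

lvcreate -V 20G -T pve/data -n test-thin   # thin volume
lvcreate -L 20G -n test-thick pve          # thick (fully allocated) volume
mkfs.ext4 /dev/pve/test-thin
mkfs.ext4 /dev/pve/test-thick
# mount each, run the same fio job in both, and compare against ext4 on a plain partition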

Also, I do not think the exact test matters as much, as long as you run the same one when comparing drives against each other (as opposed to gauging performance when used for VMs). Sequential is the most favourable in terms of the results you get, and it should also be fairly consistent (across tests) with an amount like 4G.

The last thing: you know you can create a ZFS pool where the HDD is your vdev and use a portion of that SSD as L2ARC, which should perform reasonably well, especially with CTs.
 
@tempacc346235 "All these tests when performed, everything else was off?": Yes, just booted into proxmox and the did the tests. So nothing running extra on the system.

@tempacc346235 "create a ZFS pool where the HDD is your vdev and use portion of that SSD as L2ARC". I need to check how to do this.

Is there a way (remember, I'm just a newbie) to set up ext3 as the default for the Proxmox LVM-Thin storage?
 
@tempacc346235 "create a ZFS pool where the HDD is your vdev and use portion of that SSD as L2ARC". I need to check how to do this.

Yeah, the command is trivial (I intentionally let you search on your own), but so that you do not blame me later on, you might want to read up some more on it, e.g.: https://klarasystems.com/articles/openzfs-all-about-l2arc/
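That said, the rough shape of it is just (pool and device names are placeholders, and mind that creating the pool wipes the disk):

zpool create tank /dev/sdb         # HDD as the data vdev
zpool add tank cache /dev/sda4     # SSD partition added as L2ARC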

Is there a way (remember, I'm just a newbie) to set up ext3 as the default for the Proxmox LVM-Thin storage?

So I am not sure what you are asking here. LVM-thin is a volume manager construct, i.e. an abstraction layer for partitioning. When your VM gets a virtual disk, it lives on LVM-thin as e.g. /dev/pve/vm-100-disk-0; that is a block device, so what filesystem goes on there (within its own partition table) depends entirely on what you install onto it.
 
Is there a way (remember, I'm just a newbie) to set up ext3 as the default for the Proxmox LVM-Thin storage?
What do you mean? LVM-Thin is block storage, so on its own it has no filesystem. When storing a VM on that thin pool it just provides the block device, and it is up to the VM to format it with whatever that guest OS supports. When creating an LXC on it, PVE will create a new block device without a partition that is formatted with an ext filesystem (not sure which version, I think it was ext4).
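You can see what PVE used on an existing container volume with something like (the volume name is just an example):

blkid /dev/pve/vm-101-disk-0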
 
When creating an LXC on it, PVE will create a new block device without a partition that is formatted with an ext filesystem (not sure which version, I think it was ext4).

I can only imagine it was this he was after then ... but this depends on the host (fs); if it's on e.g. ZFS it's not a zvol but a regular dataset, if I remember correctly. I would assume this is hard-coded in the PVE scripts; not impossible to change, but the next question would be why.
 
