High IOPS on host, low IOPS in VM

Let's approach this differently; I think you don't really care about sync writes, parallelism, and so on.

I think you simply want better performance. There is indeed a performance issue with zvols, and every VM on your ZFS storage uses a zvol, unless you define the storage as a "Directory" type on top of the ZFS pool (doing that will speed up performance).
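As a minimal sketch of what that looks like (the dataset name rpool/vmdata and the storage ID zfs-dir are just placeholders, adjust to your pool):

zfs create rpool/vmdata
pvesm add dir zfs-dir --path /rpool/vmdata --content images,rootdir

VM disks created on such a directory storage end up as raw/qcow2 files on the dataset instead of zvols.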

Zvols are simply the slowest possible storage method for VMs; they don't support most ZFS features, such as special small blocks, and they get very little attention in ZFS development.

However, ZFS 2.2.4 contains a fix that should make zvols roughly twice as fast, so just wait.

I sadly stumbled across this as well. For example, the raw storage performance here is ~40 GB/s read and ~20 GB/s write.
With LVM/LVM-thin I'm getting around 80% of that speed. With ZFS on the host I'm hitting some hard limit: read and write are both capped at around 6 GB/s.
Inside a zvol-backed VM I'm getting around 1 GB/s xD
It's still fast thanks to 8x Micron 7450 Max, but still almost 20 times slower than it could be.

With 2.2.4 I expect to get around 2-3 GB/s inside the VM, but I wouldn't expect too much, so it will probably be more like 1.5-2 GB/s.

However, things will get a lot better with updates; there is massive development going on, but it will still take at least a year until things get better for NVMe drives.
Cheers
 
Hi, I'm facing the same problem:

Dell R630 with 2x Micron 7400 3.84 TB in a ZFS mirror (single-disk results are not much better), compression off (with it on, the results are even worse). Latest Proxmox, fresh install.

Test command:
fio --filename=/dev/DISK --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --readonly
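(For reference, inside a guest without a raw /dev/DISK to read from, a comparable file-based run might look like the following; this exact command is an assumption on my part, not necessarily what was used for the guest numbers below:)

fio --filename=/root/fio-testfile --size=8G --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1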

Host:
Read: 2701MiB/s
IOPS: 692k

Container (debian 12): ~7% of the host performance
Read: 200MiB/s
IOPS: 51.2k

VM (debian 12, no lvm): ~14% of the host performance
Read: 397MiB/s
IOPS: 102k

The problem is not HW related; I have tested it on different NVMe drives and controllers, and the results are similar.

--- edit ---

I have run the same tests with LVM as the guest storage instead of ZFS. Here are the results (the guests are the same, only migrated to the different storage; a sketch of the migration commands follows the results):

Container (debian 12): ~50% of the host performance
Read: 1365MiB/s
IOPS: 350k

VM (debian 12, no lvm): ~20% of the host performance
Read: 554MiB/s
IOPS: 142k
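(The per-disk migration can be done from the CLI; a sketch with hypothetical IDs 100/101 and a target storage called lvm-thin, on a reasonably recent PVE version:)

qm move_disk 100 scsi0 lvm-thin        # move the VM's scsi0 disk to the lvm-thin storage
pct move_volume 101 rootfs lvm-thin    # move the container's root volume to the lvm-thin storage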
 
LVM is a lot faster; that is a very well-known fact.
However, there is no fix for this on the horizon. Sadly, 2.2.4 made nothing better; zvols are still utter crap.
Literally everything else is at least twice as fast; it's hard to find something that is slower xD

However, no one will be able to help here, neither the Proxmox team nor the ZFS devs.
The Proxmox team can't do anything about it, and the ZFS devs absolutely don't care about zvols.
It's how it is; we have to live with it or find an alternative.
 
I understand the problem with ZFS, and 50% of the host performance for LXC on LVM is also acceptable for me, but why is the KVM performance on LVM so bad?
 
Because the storage for your LXC container is just passed through as a filesystem, while for KVM it's a zvol.
The one is a filesystem used directly.
The other is a block device plus the filesystem of the VM on top of it.
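A quick way to see that difference in the guest configs (a sketch with hypothetical IDs 100/101 on a ZFS storage called local-zfs):

qm config 100 | grep scsi0
# scsi0: local-zfs:vm-100-disk-0,size=32G   -> a zvol block device under /dev/zvol/
pct config 101 | grep rootfs
# rootfs: local-zfs:subvol-101-disk-0,size=32G   -> a ZFS dataset mounted as a plain filesystem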
 
Kernel 6.8.8-4-pve and ZFS 2.2.4 here. Updated test results, same setup (the only difference is that we added more RAM; it was previously 64 GB, now 480 GB):

Host:
Read: 2912MiB/s (previously 2701MiB/s)
IOPS: 745k (previously 692k)

Container (debian 12): ~66% of the host performance (previously ~7%)
Read: 1934MiB/s (previously 200MiB/s)
IOPS: 495k (previously 51.2k)

VM (debian 12, no lvm): ~9% of the host performance (previously ~14%)
Read: 270MiB/s (previously 397MiB/s)
IOPS: 68k (previously 102k)

The CT performance boost is really good; I did not expect such a difference. But I do not know why the VM performance dropped by more than 35%, from 14% of host performance to 9%. Any ideas?
 
Make sure the KVM option is enabled for the VM... My mistake was thinking it was about enabling nested virtualization inside the VM, but it actually means using hardware virtualization on the host for this VM.
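As a sketch with a hypothetical VMID of 100, this can be checked and fixed on the CLI as well as in the GUI:

qm config 100 | grep kvm    # prints "kvm: 0" if hardware virtualization was disabled for this VM
qm set 100 --kvm 1          # re-enable KVM hardware virtualization (enabled is the default)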
 
