8.4 / High IO / ZFS or kernel issues?

I didn't try with compression disabled because I rebuilt the volume as LVM-Thin and had the same IO issues. The main NVMe also uses the default file system rather than ZFS.
You can insert another SSD/NVMe, create a pool with compression disabled, and transfer the VM to that pool. Or transfer only the VM's virtual hard drive to that (uncompressed) pool.
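As a rough sketch (the device path and names here are just placeholders, adjust to your hardware):

Code:
# Create a pool on the new drive with compression off from the start
zpool create -O compression=off nvme2pool /dev/nvme1n1

# Register it as a Proxmox storage so VM disks can be moved onto it
pvesm add zfspool nvme2pool --pool nvme2pool --content images,rootdir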

 
I asked ChatGPT and it created a script for me to disable compression on all disks. Doesn't this achieve the same thing as a backup and restore?
Code:
#!/bin/bash

POOL="zfs-local"

echo "Disabling compression on all VM disks under $POOL..."

# List all ZFS datasets in the pool (excluding the pool root itself)
zfs list -rH -o name "$POOL" | grep "^${POOL}/" | while read -r dataset; do
    echo "Disabling compression on $dataset..."
    zfs set compression=off "$dataset"
done

echo "All done!"


Here's part of the shell output when I ran it:

Code:
Disabling compression on zfs-local/vm-900-disk-0...
Disabling compression on zfs-local/vm-900-disk-1...
Disabling compression on zfs-local/vm-999-disk-0...
Disabling compression on zfs-local/vm-999-disk-1...
Disabling compression on zfs-local/vm-999-disk-2...
All done!

I also disabled it on the pool for new disks. Overall the IO overhead has dropped, but as soon as I try anything on a server it spikes again. I'll migrate a disk to an NFS store and back to ZFS and see if that helps.
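For reference, covering the pool root and double-checking the result looks roughly like this:

Code:
# Pool root, so newly created disks inherit compression=off
zfs set compression=off zfs-local

# Verify: every dataset should now report "off"
zfs get -r -o name,property,value compression zfs-local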
 
That definitely seems to have disabled it on all drives, but with ZFS the change is not applied to existing data until it is rewritten to disk. So it basically just told the pool not to compress future writes; it doesn't change what's already stored.
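If you want to force a rewrite without a full backup/restore, one option (with the VM shut down) is a local send/receive round trip; a rough sketch, using one of the zvol names from the output above:

Code:
# Copy the zvol to a new dataset; the received blocks are written fresh,
# so they inherit the pool's current compression=off setting
zfs snapshot zfs-local/vm-900-disk-0@rewrite
zfs send zfs-local/vm-900-disk-0@rewrite | zfs recv zfs-local/vm-900-disk-0-new

# Swap names, keeping the old zvol until you have verified the VM still boots
zfs rename zfs-local/vm-900-disk-0 zfs-local/vm-900-disk-0-old
zfs rename zfs-local/vm-900-disk-0-new zfs-local/vm-900-disk-0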
 
That definitely seems to have disabled it on all drives, but with ZFS the change is not applied to existing data until it is rewritten to disk. So it basically just told the pool not to compress future writes; it doesn't change what's already stored.
Exactly!
 
OK, thanks. I wanted to understand whether there was a way to disable compression without destroying the pool. I'm thinking of using hardware RAID with the ARECA controller, as I did before with VMware, instead of ZFS; it seems to me that performance was better under VMware with the same HDDs.
 
There is a tradeoff to each storage design model PVE supports, each with its own pros and cons for performance, features, and possibilities.
So it depends on what you will use or have to use, and, importantly, how comfortable you are with it in case any problems come up.
Good luck anyway and have fun with PVE etc :)
 
What changes is the CPU cycles used and the number of pool blocks needed to store each write that is issued (here, by a running VM).
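You can watch both effects live while the VM is writing, for example:

Code:
# Per-device bandwidth and IOPS for the pool, refreshed every second
zpool iostat -v zfs-local 1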
 
So is it better with or without compression? I tried running a VM without compression, but the IO delay remains: it increases a lot when I write, while it stays low when I read.
Move the VM to a different disk, then, if need be, back to the original disk. You will see zero change until the data is rewritten to disk. ZFS, in my opinion, is also better for backups and the like than for VMs, unless it is properly tuned with many disks set up correctly.
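On recent PVE versions that round trip can be done from the CLI; something like this (the VM ID, disk, and storage IDs are just examples):

Code:
# Move the disk off ZFS and back; each move rewrites every block
qm disk move 999 scsi0 nfs-store --delete 1
qm disk move 999 scsi0 zfs-local --delete 1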
What exactly changes if I use ZFS with compression versus without compression?
It shrinks data on disk so it takes up less space, at the cost of extra CPU use and sometimes extra IO wait. It will often also make reads faster, but writes can be slower.
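Before deciding either way, it may be worth checking what compression was actually saving you; a ratio near 1.00x means the CPU cost was buying almost no space:

Code:
# Compression ratio per dataset (1.00x = effectively incompressible data)
zfs get -r -o name,value compressratio zfs-local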
 
Maybe your SSD is entering power-saving mode and reducing performance?


Code:
https://nvmexpress.org/resource/technology-power-features/

$> nvme list

# Check the current power state #
$> nvme get-feature /dev/nvme0n1 -f 2 -H

# Limit to power state 2 #
$> nvme set-feature /dev/nvme0n1 -f 2 -v 2
or
# Force maximum performance (power state 0) #
$> nvme set-feature /dev/nvme0n1 -f 2 -v 0
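To see which power states the drive actually supports before picking a -v value, you can dump the controller's power state descriptors (assuming the controller is /dev/nvme0):

Code:
# List supported power states (ps 0 = full power, higher = deeper saving)
$> nvme id-ctrl /dev/nvme0 | grep "^ps"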
 