I can't seem to find a concrete answer for this issue. I see a lot of people commenting on poor disk speed inside a VM, but I'm not seeing any real ways to get it sorted.
I have a storage pool set up using NVMe disks, and I benchmark it with fio using the following command:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --size=4k --filename=/Pool/file
I get over 1100 MiB/s. Perfect. Mount a disk inside my VM which resides on this storage and run the same test command, and it only gets about 120 MiB/s. The software that's running needs fast access to the disk as it's database-driven. How can I get performance close to the Proxmox host speed within the VM? The pool is a ZFS RAID10 of 4 NVMe drives, and the processing power and memory on the system are more than capable of supporting it. Since it's on ZFS, the VM disk is a raw image. I've experimented a little with the cache modes and settled on writeback, but honestly they didn't make any significant difference. Async IO is set to threads. I can't really see anything else to change.
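For reference, the disk is attached roughly like this in the VM config (the VMID, storage name and size below are placeholders, not my real values):

scsihw: virtio-scsi-pci
scsi0: nvme-pool:vm-100-disk-0,cache=writeback,aio=threads,size=100G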
Any tips on the best way to get better performance?