Strange disk performance in Windows guest

mercury131
Active Member
Aug 7, 2018
Hello everyone!

I have a Proxmox home lab, and I'm now trying to choose a filesystem for my virtual machines.
I will run Windows VMs and I need the best-performing filesystem for that.

I set up a Windows VM with this config:
Code:
agent: 1
balloon: 0
bootdisk: virtio0
cores: 4
cpu: host
memory: 4096
name: Test-IOPS
net0: virtio=A6:62:8E:A2:02:B7,bridge=vmbr0
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=206d4f33-dbf2-4a33-a9b9-0622092a7897
sockets: 1
virtio0: Silver:vm-109-disk-1,size=30G
virtio1: Gold:vm-110-disk-2,size=30G
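
One thing worth checking first (not in the original post, just a sketch): the config above sets no explicit cache mode on the disks, so the Proxmox default applies. Pinning it explicitly rules out host-side caching skewing the benchmark; the VM ID and volume names below are the ones from the config:

```shell
# Explicitly disable host page-cache involvement for the test disk.
qm set 109 --virtio1 Gold:vm-110-disk-2,cache=none

# For comparison, cache=writeback lets the host page cache absorb writes,
# which can show multi-GB/s numbers on a slow disk:
# qm set 109 --virtio1 Gold:vm-110-disk-2,cache=writeback
```

Re-running ATTO after each `qm set` (with a VM restart) shows how much of the result is cache rather than disk.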

and I tested these filesystems:
ext4
xfs
btrfs
zfs
LVM (raw, without any filesystem on top; not thin LVM)

I ran ATTO Disk Benchmark inside the VM on each of these filesystems and got very strange results =(
My test hard drive is a WDC WD10EARS-00Y5B1
(5400 RPM, 64MB cache, SATA 3.0Gb/s)

I ran the tests on the virtio1 disk, which does not hold the Windows OS (an empty disk formatted as NTFS).
Here are my test results inside the VM:

ext4 (qcow2)
[attached screenshot: ATTO results]


XFS (qcow2)
[attached screenshot: ATTO results]


BTRFS (qcow2)
[attached screenshot: ATTO results]


ZFS (pool, no compression)
[attached screenshot: ATTO results]


LVM (not thin)
[attached screenshot: ATTO results]


I don't understand how ext4, xfs and btrfs can reach 8GB/s on this hard drive.
Maybe it's cache magic?
And why do zfs and LVM get poor performance compared to ext4, xfs and btrfs?

What am I doing wrong, and which filesystem gives the best performance?
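
One way to separate real disk throughput from cache effects is to repeat the test with direct I/O. A sketch using fio from a Linux shell (the path, file size and job parameters are examples, not from this thread):

```shell
# Sequential 1M writes with the page cache bypassed (direct=1).
# /mnt/test/bench.file is a placeholder path on the filesystem under test.
fio --name=seqwrite --filename=/mnt/test/bench.file --size=4G \
    --bs=1M --rw=write --direct=1 --ioengine=libaio --iodepth=4

# Random 4K reads on the same file, to approximate ATTO's small-block rows.
fio --name=randread --filename=/mnt/test/bench.file --size=4G \
    --bs=4k --rw=randread --direct=1 --ioengine=libaio --iodepth=32
```

If the direct-I/O numbers drop to roughly what a single 5400 RPM drive can deliver (on the order of 100 MB/s sequential), the 8GB/s figures were the host or guest cache, not the disk.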

Thanks in advance!
 
The file size you test with is small (256MB); try 4GB and check again.
Hello Alexander!

I re-ran my tests on ext4 and zfs.

Results here:

ext4
[attached screenshot: ATTO results]


ZFS (pool, no compression, 8K block size in Proxmox, ashift=12)
[attached screenshot: ATTO results]


As you can see, zfs is slower than ext4 on the same hard drive.
I think the file size doesn't matter.

I don't understand why zfs is slower in this test.
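
For what it's worth, a few ZFS properties that commonly explain a gap like this can be inspected directly; the pool and dataset names below are placeholders, not taken from this thread:

```shell
# Block size, sync behaviour and caching of the zvol backing the VM disk
# ("tank/vm-110-disk-2" is an example dataset name).
zfs get volblocksize,sync,compression,primarycache tank/vm-110-disk-2

# ashift is fixed at pool creation and cannot be changed afterwards.
zpool get ashift tank
```

A volblocksize much smaller than the workload's I/O size, or sync writes landing on a single slow disk without a separate log device, are two frequent reasons a zvol benchmarks far below ext4 on the same hardware.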
 
ZFS looks slower than anything else out of the box in Proxmox.
In my case it was unusable. Just moving the virtual disk to a single XFS-formatted HDD gave workable results.
The RAIDZ zpool of 4 SSDs was so slow that a VM placed on it was impossible to work with.
So ZFS manages to make slower what is physically faster.
On the other hand, I have read a lot of articles reporting good RAIDZ performance even on legacy hardware (but always based on FreeBSD).

I believe the Proxmox wiki should contain a clear troubleshooting guide for getting ZFS performance to a decent level.
This forum has plenty of posts about poor ZFS performance, but none of them gives a clear troubleshooting path.
Most of the time the blame is put on the hardware.
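
As a starting point for such a troubleshooting path, these are the checks I would run first (a sketch; "rpool" is a placeholder pool name, and `arc_summary` availability depends on the zfsutils version installed):

```shell
zpool status -v rpool                 # degraded vdevs, ongoing scrub/resilver?
zpool list rpool                      # fill level - performance drops when a pool is nearly full
zpool get ashift rpool                # should match the disks' physical sector size
zfs get sync,compression,atime rpool  # sync forced to "always" on slow disks is a common culprit
arc_summary | head -n 40              # ARC size and hit rate
```

Only after these come out clean does it make sense to start blaming the hardware.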
 
