Hey guys,
After many years of being a VMware user at home, I decided it was time to switch to Proxmox for a bit more freedom in configuration and options.
I am having some issues with disk performance. I think my issues probably come down to my limited understanding of the disk setup, so I have some questions; before that, though, here is my basic setup:
AMD 3900x Processor
64GB Memory
Drive configuration:
IBM M5015/LSI 9220-8i controller in IT/HBA mode.
Proxmox installed to a 250GB Gen4 NVMe drive connected to the mainboard.
Connected to HBA:
ZFS RAID1 pools with the following disks:
- 2x 1TB WD NAS 7200RPM Hard Drives
- 2x 1TB WD Enterprise 7200RPM Hard Drives
- 3x 4TB WD NAS 7200RPM Hard Drives
The issue I am experiencing is very high IO latency (30-80% during simple tasks, compared to around 0-4% at idle), which locks up all the VMs or outright crashes them on large reads/writes, along with generally poor performance. This was never an issue under VMware, where I averaged 350MB/s read/write in my Windows VMs. (I know IOPS is a better measurement, but a quick read/write test under Windows was a quick-and-dirty way to gauge general performance for the kind of work most of the VMs do.)
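If it helps with diagnosis, I can get a more meaningful IOPS/latency number by running something like this on the Proxmox host instead of my quick Windows test (the /tank/test path is just a placeholder for a dataset on the affected pool):

```
# Random 4K writes, roughly the pattern the VM disks generate
# (/tank/test is a placeholder dataset on the affected pool)
fio --name=vm-sim --directory=/tank/test --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=16 --numjobs=1 --runtime=60 --time_based --group_reporting

# Watch per-disk latency and throughput on the pool while the test runs
zpool iostat -v 5
```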
I have performed the following after a bit of digging/reading:
- Changed to SCSI disks and the VirtIO SCSI controller for all VMs - This helped hugely
- Installed the QEMU guest agent on all VMs - This helped alongside the change above
- Enabled/disabled disk cache - This didn't seem to make any difference
- Enabled IO thread for the Windows VMs - This seemed to help sustained reads/writes without locking up the host (rough CLI equivalents of these changes below)
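For reference, the CLI equivalents of those changes should look roughly like this (VM ID 100 and the disk/storage names are placeholders for my actual VMs):

```
# Use the single VirtIO SCSI controller so each disk can get its own IO thread
qm set 100 --scsihw virtio-scsi-single

# Re-attach the existing disk with iothread enabled
# (local-zfs:vm-100-disk-0 is a placeholder for the real volume)
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1
```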
What I think I should do (please note I am unable to purchase any more hardware, so I am trying to make do with what I have in a homelab):
- Change the recordsize of my large ZFS pool to 16K or larger, as this volume mostly contains large media files (command sketch after this list)
- Change my smaller ZFS pool to a striped and mirrored setup (RAID10); this is what I was running under VMware with a battery-backed RAID controller
- Maybe add an SSD for cache?
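If I go ahead with those three changes, I'm assuming the commands would look something like this (pool/dataset names and disk IDs are placeholders, this assumes the media lives in a filesystem dataset rather than a zvol, and rebuilding the small pool destroys its data, so it would mean restoring from backup):

```
# 1. Larger recordsize on the media dataset; only affects newly written files
#    (tank/media is a placeholder; 16K or up, with 1M a common choice for big media files)
zfs set recordsize=1M tank/media

# 2. Rebuild the small pool as striped mirrors (RAID10-style)
#    WARNING: destroys the pool; the disk IDs below are placeholders
zpool destroy smallpool
zpool create smallpool \
  mirror /dev/disk/by-id/ata-WDC_DISK1 /dev/disk/by-id/ata-WDC_DISK2 \
  mirror /dev/disk/by-id/ata-WDC_DISK3 /dev/disk/by-id/ata-WDC_DISK4

# 3. Add a spare SSD as an L2ARC read cache (device path is a placeholder)
zpool add smallpool cache /dev/disk/by-id/ata-SSD_DISK
```

From what I have read, an L2ARC only helps repeated reads and uses some ARC memory for its index, while a separate SLOG device only helps sync writes, so I am not sure which (if either) is the right use for a spare SSD here.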