Hello there, I'm currently working with resolving some performance issues on a Proxmox installation.
The VMs are hosted on a ZFS array accessed over the network, and the I/O behavior is strange.
The server currently hosts three Windows 10 VMs with the configuration shown in the attached image. The issue we're working to resolve concerns the disk performance of these machines. VM 1 is the primary VM; the other two VMs were created from a template of that VM.
When VM 1 performs I/O, VM 2 and VM 3 are brought to a crawl, with Task Manager showing their disk I/O completely pinned at 100%. The behavior varies: sometimes VM 2 and VM 3 are pinned at 100% disk I/O while VM 1 is transferring only 8 MB/s; other times they sit at 87% while VM 1 transfers 20 MB/s.
Are there any tuning variables or other settings I should look into to resolve this behavior?
Thanks for any insight.