Hello,
Environment:
I have 3 Proxmox servers on 3 physical machines. All of them are identical, except that one has an additional hard drive and a RAID controller. All are connected via a 1 Gbps copper interface.
On the server with the additional drive I configured a NAS service and attached it to the other servers as storage for VM disks. The tested VM never runs on this server.
Tests and results:
I wanted to check how VM I/O performance depends on the storage type, so I created a Linux VM with an additional disk and measured IOPS with the vdbench software.
When the additional disk was placed on local storage (on the server where the VM runs), either directory storage or local LVM, my results were slightly below 500 IOPS.
But when I moved the additional disk to the NAS storage exported by the 3rd server, IOPS rose to over 2000. That was even higher than when the VM disk was on shared LVM located on an enterprise-class storage array connected to the servers via SAN.
I ran the same vdbench script on the same machine during every test, and there were no other VMs on the tested Proxmox storages.
Similar tests on Windows give the same results (a big IOPS difference on NAS storage).
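For illustration, a vdbench parameter file for this kind of test could look like the sketch below (the device path, block size, and read percentage are just example values, not my exact script); `openflags=o_direct` bypasses the guest page cache so the measurement reflects the storage path rather than cached I/O:

```
* Example vdbench parameter file (illustrative values only)
* sd = storage definition: the raw test disk inside the VM
sd=sd1,lun=/dev/sdb,openflags=o_direct
* wd = workload definition: 4k transfers, 70% reads, fully random
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
* rd = run definition: run at maximum rate for 60 seconds
rd=run1,wd=wd1,iorate=max,elapsed=60,interval=1
```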
Why is there such a huge IOPS difference?
Theoretically the NAS storage should be slower than both local and SAN storage, but my tests show something completely different.