Storage performance IOPS difference

progoram

New Member
May 11, 2026
Hello,

Environment:

I have 3 Proxmox servers on 3 physical machines. All of them are identical, but one has an additional hard drive and a RAID controller. All are connected via a 1 Gbps copper interface.
On the one with the additional drive I configured a NAS service and connected it to the other servers as storage for VM disks. The tested VM never runs on this server.

Tests and results:
I wanted to check how VM I/O performance depends on the storage type. So I created a Linux VM with an additional drive and tested IOPS with the vdbench software.
When the additional disk was placed on local directory storage or local LVM storage (on the server where the VM is running), my IOPS results were slightly below 500 IOPS.
But when I moved the additional disk to the NAS storage exported from the 3rd server, IOPS rose to over 2000, and it was even higher than when the VM disk was on shared LVM located on an enterprise-class disk array connected to the servers via SAN.
I ran the same vdbench script on the same machine during every test and there were no other VMs on the tested Proxmox storages.
Similar tests on Windows give the same results (a big IOPS difference on NAS storage).
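
For reference, a roughly equivalent test with fio (just an illustrative command, not my exact vdbench profile; /mnt/testdisk is an example mount point for the additional disk) would be 4k random writes at queue depth 1:

Code:
# 4k random writes, queue depth 1; --direct=1 bypasses the guest page cache and
# --fsync=1 forces a flush after every write, so a write-back cache should not
# be able to inflate the numbers
fio --name=iops-test --filename=/mnt/testdisk/fio.test --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --fsync=1 --runtime=60 --time_based --group_reporting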

Why is there such a huge IOPS difference?
Theoretically NAS storage should be slower than local and SAN, but my tests show something completely different.
 
but you probably pasted a link to my post by mistake
No ;-)

You posted twice. And I wanted to give a potential reader/replier a chance to realize this before writing an answer to a question that may already have been answered over there...
 
Why is there such a huge IOPS difference?
I do not know. And I've never used vdbench, so I have no idea what that thing measures. I hesitated to post the obvious:

Generally local storage is always faster than network devices. As long as the same type of HDD/SSD/NVMe devices are used and configured the same way.

A network adds latency. Always.

So... if you use remote storage with a lazy write cache it very well may look like it is much faster than local storage with "sync"-mode enabled...
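
If you want to see that effect directly, a crude comparison from inside a Linux guest (the path is only an example) is buffered writes versus writes that must be flushed to the medium:

Code:
# buffered: dd can return as soon as the data sits in a write cache somewhere
dd if=/dev/zero of=/mnt/testdisk/ddtest bs=4k count=10000
# synchronous: oflag=dsync forces every 4k block to be flushed before the next one
dd if=/dev/zero of=/mnt/testdisk/ddtest bs=4k count=10000 oflag=dsync

The second run is usually dramatically slower, especially on spinning disks.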
 
I do not know. And I've never used vdbench, so I have no idea what that thing measures. I hesitated to post the obvious:

Generally local storage is always faster than network devices. As long as the same type of HDD/SSD/NVMe devices are used and configured the same way.

A network adds latency. Always.

So... if you use remote storage with a lazy write cache it very well may look like it is much faster than local storage with "sync"-mode enabled...

You're right, local storage should always be faster than network devices. But in my case the results are what they are. If they were as I expected, I probably wouldn't have created this post.

Not only vdbench, but iostat on Windows also shows much higher IOPS when the VM disk is on NAS. Of course I'm not ruling out that I did something wrong.
My local servers and the NAS Linux server have identical HDD drives. The only difference is that the NAS server has one additional drive, which is exported as NAS storage.

Here are my mount options:
192.168.xxx.yyy:/mnt/vmdata-share/vmdisks on /mnt/pve/nfs-test type nfs4 (rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.xxx.zzz,local_lock=none,addr=192.168.xxx.yyy)
so it is definitely mounted in sync mode.

and the exports on the server side:
/mnt/vmdata-share/vmdisks 192.168.xxx.zzz(rw,sync,no_subtree_check)
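
To double-check what the server actually applies (beyond what is written in /etc/exports), I can also list the effective export options:

Code:
# print the exports together with the effective options the NFS server applies
exportfs -v

Or could the write-back cache of the RAID controller on that server still acknowledge writes early even with a sync export?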

My /etc/pve/storage.cfg looks like this:
Code:
nfs: nfs-test
        export /mnt/vmdata-share/vmdisks
        path /mnt/pve/nfs-test
        server 192.168.xxx.yyy
        content snippets,rootdir,iso,images
        nodes lab-prox01,lab-prox02
        options vers=4.2,sync
        prune-backups keep-all=1
        snapshot-as-volume-chain 1
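
One more thing that might matter is the cache= setting on the VM disk itself; it can be checked in the VM config (VMID 100 is just an example):

Code:
# show the VM configuration, including the cache= option of each virtual disk (if set)
qm config 100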

Do you see any mounting mistakes, or do you have another idea why the IOPS results are so abnormal?