I have always used NFS shares, which seems to be the most common sharing protocol, but today I decided to compare NFS with SMB. All of my use is hosting VM QCOW2 files.
The storage target is a UGREEN DXP4800 Plus running TrueNAS SCALE. It has a mirrored pair of spinning drives. I created a dataset and shared it with both SMB and with NFS. I then created a pair of VMs on a Proxmox server, one VM using the SMB share and one using the NFS share. Everything is connected with a 2.5 Gb/s network.
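For reference, the two storages are defined in Proxmox roughly like this in /etc/pve/storage.cfg (the names, addresses, and paths below are placeholders, not my exact config):

    nfs: truenas-nfs
            path /mnt/pve/truenas-nfs
            server 192.168.1.50
            export /mnt/tank/vmstore
            content images

    cifs: truenas-smb
            path /mnt/pve/truenas-smb
            server 192.168.1.50
            share vmstore
            content images
            username proxmox

Each VM's QCOW2 disk lives on one of those two storages.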
I am benchmarking the disk in each VM, one at a time, using iozone (/opt/iozone/bin/iozone -t1 -i0 -i2 -r1k -s1g /tmp).
According to iozone, my SMB share is about 3x faster, except for the random write test, where it is 5x faster.
Is this just a quirk of iozone? If I rsync a file to each VM from a third server, the NFS share gives about 140% of the throughput I get from the SMB share.
If I use dd to create a file, NFS is faster by almost a third.
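For context, those quick tests were roughly like this (flags, file names, and sizes are illustrative, not the exact commands I ran):

    rsync -av --progress testfile.bin root@vm:/tmp/                       (pushed from the third server to each VM)
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 conv=fdatasync      (run inside each VM)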
One other thing I noticed: using dd against the SMB share really thrashes the hard drives; NFS nowhere near as much.
Disk benchmarks seem to be generally junk.
Comments?