Hey, this is just an exploratory thread, I'm trying to get a hint of what is happening.
Quick explanation of the situation:
Many of my VMs use a TrueNAS server over NFS (v3) to store their services' data (Nextcloud, Moodle, e-mail...).
I am troubleshooting slow response times for a service that accesses its data this way: over NFS.
To do the troubleshooting I started by using 'dd', like so:
Code:
dd if=/dev/zero of=/mnt/livedata/test1.img bs=100M count=1 oflag=dsync
(/mnt/livedata being the NFS share; I also test locally with /tmp/test1.img)
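For anyone reproducing this: a couple of dd variants that might help separate per-write commit latency from streaming throughput (the test2/test3 file names are just placeholders):
Code:
# many smaller synchronous writes, to expose per-write commit latency
dd if=/dev/zero of=/mnt/livedata/test2.img bs=1M count=100 oflag=dsync
# bypass the client page cache entirely with O_DIRECT
dd if=/dev/zero of=/mnt/livedata/test3.img bs=1M count=100 oflag=direct
If these numbers diverge a lot, caching somewhere in the stack is probably what differs between the setups.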
Here are the summarized results:
On a VM with a qcow2 disk:
dd locally: ~500 MB/s
dd over NFS: ~10 MB/s
On a VM with a raw disk:
dd locally: ~80 MB/s
dd over NFS: ~80 MB/s
These results are very surprising to me... I understand that dd'ing locally can perform differently depending on the virtual disk type, but over NFS?
Note that I have 2 VMs with raw disks and a few more with qcow2 disks; they all give results matching those described here.
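One thing I plan to rule out is whether the qcow2 and raw VMs use different disk cache modes, since that changes how guest writes hit the host page cache. A sketch, assuming a libvirt-managed host ('myvm' is a placeholder domain name; adjust for your hypervisor):
Code:
# show the disk driver settings, including the cache mode, for a given guest
virsh dumpxml myvm | grep -i 'driver name'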
Also note that the NFS mount options seem to have no impact. They are all mounted async, and other options like 'proto', 'mountproto', and 'rsize/wsize' make essentially no difference on this test.
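To confirm the options are actually applied (and not silently renegotiated by the client), I check the effective mount parameters:
Code:
# show the options the kernel actually negotiated for each NFS mount
nfsstat -m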
Should I just go for 'raw' disks if the poor NFS performance is the tightest bottleneck in my setup?
Any insight/hint is welcome.