Hello,
I'm trying to evaluate storage performance differences between ESXi and Proxmox, and I'm having trouble identifying where the performance issues are. Using the same hardware between tests, I'm getting drastically different results from the two hypervisors. What can I look into to increase performance on NFS or iSCSI? I'm also disappointed that there's no ability to create snapshots on iSCSI (shared LVM doesn't support them), and with the current performance on NFS, I don't think it's a viable option either.
Hardware:
Dedicated 25GbE cards are installed in the test machine and the SAN. When using iSCSI, it is fully multipathed and verified on both ends to be using all paths. When using NFS, the 25GbE connections are bonded with LACP using balance-rr.
The SAN is an all-flash network appliance with dual controllers, two 25GbE ports each; all four NICs are used for multipathing.
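For what it's worth, here is roughly how I verified the paths and the bond on the Proxmox side (bond0 is a placeholder for my bond's name):

```
# One iSCSI session per path is expected (four in this setup)
iscsiadm -m session

# The multipath map should show all four paths as active/ready
multipath -ll

# For the NFS tests, confirm the bond mode and that both 25GbE slaves are up
cat /proc/net/bonding/bond0
```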
All tests were run using the same fio scripts on the same VM, which was transferred between Proxmox and ESXi using Veeam.
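For reference, the fio jobs were along these lines (a representative sketch; /dev/sdb is a placeholder for the VM's test disk, and the exact iodepth/numjobs values in my scripts may have differed):

```
# 4k random write for the IOPS tests
fio --name=randwrite --filename=/dev/sdb --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
    --runtime=60 --time_based

# 1M sequential read for the throughput tests
fio --name=seqread --filename=/dev/sdb --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --iodepth=16 --numjobs=1 \
    --runtime=60 --time_based
```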
Here are the test results:
(The VMware NFS numbers aren't directly comparable, as that connection was only 10GbE at the time; but based on the IOPS, I suspect it is still much faster than Proxmox.)
| Setup | Write IOPS | Read IOPS | Write Throughput (MB/s) | Read Throughput (MB/s) |
|---|---|---|---|---|
| Proxmox iSCSI LVM | 60000 | 54000 | 2300 | 2000 |
| Proxmox iSCSI Direct | 84200 | 85600 | 2300 | 1000 |
| Proxmox NFS | 48400 | 17600 | 1870 | 181 |
| VMware iSCSI | 54700 | 107000 | 2800 | 5530 |
| VMware NFS | 46300 | 53400 | 1160 | 1160 |
Is there any tuning that can be done to increase performance on either NFS or iSCSI?
I've already changed the iSCSI tuning settings to match this post, with no change in performance:
https://forum.proxmox.com/threads/s...th-dell-equallogic-storage.43018/#post-323461
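For concreteness, the kind of change involved is raising the per-session queue depths in /etc/iscsi/iscsid.conf, along these lines (example values, not necessarily the exact ones from that thread):

```
# /etc/iscsi/iscsid.conf -- raise per-session queue depths from the defaults
node.session.cmds_max = 1024
node.session.queue_depth = 128
```

(These only take effect after the sessions are logged out and back in, e.g. `iscsiadm -m node -u` followed by `iscsiadm -m node -l`.)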
Any help would be appreciated.