Hello all,
We have the following Proxmox setup:
One storage server: HP DL380p G8, Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz, 32GB RAM
Storage pool: 8x Samsung SSD 870 QVO 1TB, arranged in ZFS as four 2-drive mirrors striped together (RAID10-style); a sketch of the layout is included after the setup description
OS: OpenMediaVault 6.0.28-3 (Shaitan), Debian based
Kernel: Linux 5.16.0-0.bpo.4-amd64
NFS: nfs-kernel-server/stable,now 1:1.3.4-6 amd64 [installed], NFS version 3
Local storage write speed on this server, measured with dd, is about 2 GB/s (example command below the setup description)
This storage is connected over NFS to three Proxmox virtualization nodes via Intel X520-DA1 (1x SFP+) 10Gbit network cards through a Mikrotik CRS317-1G-16S+ switch. Each virtualization node has the same network card as the storage server.
Proxmox versions tested: 6.4-4, 7.2-4
pve-manager/6.4-4/337d6701 (running kernel: 5.4.106-1-pve)
Network throughput measured with iperf from a virtualization node to the storage server is 7-10 Gbit/s, so the network itself looks OK.
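For reference, here is a rough sketch of the pool layout and of how the baseline numbers above were taken. Device names, paths, and exact dd/iperf flags are placeholders rather than the literal commands we ran:

# ZFS pool on the storage server: four 2-drive mirrors striped together (device names are placeholders)
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh

# Local write test on the storage server (roughly how the ~2 GB/s figure was obtained)
dd if=/dev/zero of=/tank/testfile bs=1M count=10000 conv=fdatasync status=progress

# Network check between a node and the storage server
iperf -s                      # on the storage server
iperf -c <storage-ip> -t 30   # on a virtualization node, reports 7-10 Gbit/s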
Issue: slow write performance when measured with dd on a virtualization node, writing over the NFS mount exported by the storage server.
We expected up to 1-1.2 GB/s from dd when measuring on the node; instead we see around 250-350 MB/s, roughly four times slower than expected (test sketched below).
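The test on the node looks roughly like this; the mount point and dd flags are placeholders:

# On a Proxmox node; /mnt/pve/nfs-storage is a placeholder for the actual NFS mount point
dd if=/dev/zero of=/mnt/pve/nfs-storage/testfile bs=1M count=10000 conv=fdatasync status=progress
# reports around 250-350 MB/s instead of the expected ~1-1.2 GB/s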
We tried different storage OSes (plain Ubuntu, TrueNAS, and OpenMediaVault); all of them give the same slow results.
We have also reset and even replaced the Mikrotik switch in the middle, but this did not solve the issue.
What we do observe is a delay between issuing the dd command and network activity appearing on the switch. It looks as if the client tries to pre-allocate the blocks needed for the write before sending any data, which drags the overall result down (see the sketch below).
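Roughly how we see that delay (interface name and paths are placeholders; we simply watch TX counters on the node while dd runs):

# On the node, terminal 1: watch TX bytes on the 10G interface (enp3s0f0 is a placeholder name)
watch -n1 'ip -s link show dev enp3s0f0'

# On the node, terminal 2: start the write and time it
time dd if=/dev/zero of=/mnt/pve/nfs-storage/testfile bs=1M count=10000 conv=fdatasync
# TX counters barely move for a noticeable time after dd starts, then traffic finally ramps up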
At this point we are lost and would appreciate any help or advice from the community, especially considering that the same setup runs at the full 1.2 GB/s on an identical configuration of machines, network cards, and switch where the virtualization servers use the old kernel 4.15.18-28-pve.
Thank you