Hello,
I have a problem with NFS performance from VE 4.3:
- iperf test between two Proxmox nodes: 5 Gbps (it's OK; see the command sketch after this list)
- IO benchmark from the Proxmox server to NFS: 22K IOPS (OK)
- IO benchmark from a Proxmox VM mounting NFS directly: 22K IOPS (OK)
- IO benchmark from a Proxmox VM using a local virtual disk that is stored on NFS by the Proxmox server: 1.5K IOPS (very bad)
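A minimal sketch of how the node-to-node throughput test can be reproduced; the exact iperf options used are not given in the post, so the flags and the server address below are assumptions:
# on node A (server side)
iperf -s
# on node B (client side), 30 seconds with 4 parallel streams against node A (example address)
iperf -c 10.0.0.1 -t 30 -P 4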
I cannot understand why the VM can only reach 1.5K IOPS.
Proxmox
Version: pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-2-pve)
CPU:
# grep 'E5-2680 v3' /proc/cpuinfo |wc -l
48
RAM:
# grep MemTotal /proc/meminfo
MemTotal: 264036248 kB
Local disks: SATA with RAID 1
Load: 14:22:05 up 22 days, 17:18, 1 user, load average: 8.22, 7.80, 7.56
Network: 10Gbps x 2 Active/Active, mode: balance-rr
NFS
Type: full SSD
Network: 10Gbps x 2 Active/Active, mode: balance-rr
Latency from the Proxmox server: 0.090 ms
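Not shown in the post, but for context, a balance-rr bond on a Proxmox node is normally declared in /etc/network/interfaces roughly as below; the interface names and address are placeholders, not the actual configuration:
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-miimon 100
    bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0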
Benchmark VM
vCPU: 16 cores
RAM: 12 GB
Disk: 50 GB (stored on NFS)
Driver: both SCSI and VirtIO were tested, with the same result.
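For completeness, the virtual disk definition in the VM config (/etc/pve/qemu-server/<vmid>.conf) looks roughly like the lines below; the storage name, VM ID, image format and cache setting are assumptions, since the actual config is not included in the post:
# SCSI variant (example, not the real config)
scsi0: nfs-storage:101/vm-101-disk-1.qcow2,size=50G,cache=none
# VirtIO variant that was also tested (example)
virtio0: nfs-storage:101/vm-101-disk-1.qcow2,size=50G,cache=none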
Benchmark command
time fio --name=test --rw=randread --size=256MB --iodepth=1 --numjobs=64 --directory=/tmp/ --bs=4k --group_reporting --direct=1 --time_based --runtime=3600
The --directory=/tmp/ argument depends on the test: when I test with NFS mounted inside the VM, I use the NFS mount point directory instead of /tmp/ (see the example below).
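For example, the NFS-mounted-in-VM run differs only in the --directory argument; the /mnt/nfs mount point is just an example, the real path is not given in the post:
# same workload, but pointed at the NFS mount inside the VM instead of the virtual disk
time fio --name=test --rw=randread --size=256MB --iodepth=1 --numjobs=64 --directory=/mnt/nfs/ --bs=4k --group_reporting --direct=1 --time_based --runtime=3600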
Why is IO performance from inside the VM only around 7% (1.5K / 22K ≈ 0.07) of the performance when using NFS directly?
Thanks.