I'm running three Proxmox 3.4 nodes using NFS shared storage over a dedicated 1 Gbit/s network switch.
Code:
root@lnxvt10:~# pveversion
pve-manager/3.4-11/6502936f (running kernel: 2.6.32-43-pve)
root@lnxvt10:~# mount | grep 192.168.100.200
192.168.100.200:/mnt/volume0-zr2/proxmox1/ on /mnt/pve/freenas2-proxmox1 type nfs4 (rw,noatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.30,minorversion=0,local_lock=none,addr=192.168.100.200)
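If it helps, I can also test a raw write straight over the NFS mount from the node, to take qcow2 and KVM out of the picture; something like this (the file name is just an example):
Code:
# sequential write over the NFS mount, bypassing qcow2/KVM entirely;
# conv=fdatasync forces a flush so the figure isn't just page cache
root@lnxvt10:~# dd if=/dev/zero of=/mnt/pve/freenas2-proxmox1/test.dd bs=1M count=1024 conv=fdatasync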
My VMs are qcow2-based.
I'm experiencing very slow performance.
VMs (both Windows and Linux) are very slow and usually hang on iowait, but when I monitor the NAS side there is nowhere near the expected load: Ethernet usage is only about 20-30 Mbit/s.
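(For reference, this is roughly how I watch the iowait inside a Linux guest; iostat comes from the sysstat package:)
Code:
# the 'wa' column shows CPU time spent waiting on I/O
vmstat 1
# per-device view: await and %util climb when the virtual disk is the bottleneck
iostat -x 1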
I don't think the problem is purely network-related, because iperf gets a reasonable speed:
Code:
------------------------------------------------------------
Client connecting to 192.168.100.200, TCP port 5001
TCP window size: 19.6 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.30 port 56835 connected with 192.168.100.200 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-30.0 sec 3.26 GBytes 933 Mbits/sec
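(That was a plain 30-second TCP test, invoked more or less like this:)
Code:
# on the NAS (192.168.100.200)
iperf -s
# on the Proxmox node
iperf -c 192.168.100.200 -t 30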
Also, dd run directly on the NAS filesystem gets a much better result:
Code:
[root@freenas2] /mnt/volume0-zr2/proxmox1# dd if=/dev/zero of=file.dd bs=320M count=10
10+0 records in
10+0 records out
3355443200 bytes transferred in 16.386541 secs (204768244 bytes/sec)
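One caveat I'm aware of: if compression is enabled on the ZFS dataset, zeros from /dev/zero compress away and the dd figure overstates real throughput. A quick sanity check, assuming the dataset name matches the mount path:
Code:
# check whether the dataset compresses writes (dataset name assumed from the path)
zfs get compression volume0-zr2/proxmox1
# incompressible data gives a more honest sequential number
dd if=/dev/random of=file2.dd bs=1M count=1024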
OK, the bottleneck could be the NFS/qcow2 combination, but can that really explain results this poor?
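One thing I still want to rule out is the disk cache mode on the VMs; I may test something like this (the VM ID and volume name here are just examples):
Code:
# switch a test VM's virtio disk to writeback cache for comparison
qm set 100 --virtio0 freenas2-proxmox1:100/vm-100-disk-1.qcow2,cache=writeback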