[SOLVED] Abysmal disk performance in KVM for iSCSI LUN

Okyd

New Member
Aug 21, 2015
Canada
I am confused by the disparity between a simple storage test on the host and the same test in a VM.
I have a test iSCSI LUN and am running simple dd benchmarks against it.

On my Proxmox host I run: dd if=/dev/zero of=/dev/sdd bs=1G count=45
and get 117 MB/s.
When running the same benchmark in a VM connected to the same LUN, I get 15 MB/s.

Is this to be expected? I am assuming not ... I would expect the performance to be much closer between the host and the KVM VM.

A little about my setup:
I have one Proxmox server and one SAN running FreeNAS with an 8-disk RAIDZ2 array.
Both servers use an Intel NIC for iSCSI.

Any direction as to where I should look would be appreciated.
 
Code:
boot: cdn
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
net0: e1000=42:ED:D8:16:3B:2D,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: vm-200:vm-200-disk-1,size=102392M
scsi1: vm-test:vm-200-disk-1,size=49G
smbios1: uuid=ca24be29-edf3-40c3-adc0-9df17f1d5450
sockets: 1
 
You need to choose the 'virtio' SCSI controller in the VM options.

(Note that you can also use virtio disks instead of SCSI disks.)
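For reference, here is a rough sketch of setting the controller from the CLI instead of the GUI, assuming VMID 200 as in the config you posted:
Code:
qm set 200 --scsihw virtio-scsi-pci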
I'll give it a try this evening. I tried virtio disks with the same issue, but I will look for the controller option.
 
Not much luck, unfortunately.



Code:
root@pve1:~# cat /etc/pve/qemu-server/200.conf
boot: cdn
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 1024
net0: e1000=42:ED:D8:16:3B:2D,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=ca24be29-edf3-40c3-adc0-9df17f1d5450
sockets: 1
virtio0: vm-200:vm-200-disk-1,size=102392M
virtio1: vm-test-disk-2:vm-200-disk-1,size=50G
virtio2: VMs:200/vm-200-disk-1.qcow2,format=qcow2,size=48G


Code:
root@Test01:~# dd if=/dev/zero of=/dev/vdb bs=1G count=2
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 122.425 s, 17.5 MB/s
 
I am confused by the disparity between a simple storage test on the host and the same test in a VM.
I have a test iSCSI LUN and am running simple dd benchmarks against it.

On my Proxmox host I run: dd if=/dev/zero of=/dev/sdd bs=1G count=45
and get 117 MB/s.
...
Hi,
right, your benchmark is simple... much too simple, because you are mostly measuring caching.

So you can't compare these values.

Try with fdatasync:
Code:
dd if=/dev/zero of=/dev/sdd bs=1G count=45 conv=fdatasync
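(Just as an aside, and only as a sketch: you could also bypass the page cache entirely with O_DIRECT and a smaller block size; the sizes here are purely illustrative.)
Code:
dd if=/dev/zero of=/dev/sdd bs=1M count=4096 oflag=direct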
Udo
 
The result is similar; I repeated the test on the host:

Code:
root@pve1:~# dd if=/dev/zero of=/dev/sdh bs=1G count=45 conv=fdatasync
45+0 records in
45+0 records out
48318382080 bytes (48 GB) copied, 416.974 s, 116 MB/s

I realise these are not "real world" results, but I would expect to be able to saturate the Gigabit link like this from a VM if the host is capable of doing so.
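(One thing I might also try, just as a sketch, is ruling out the network path from the VM with iperf; the address below is only a placeholder for the FreeNAS box.)
Code:
# on the FreeNAS side
iperf3 -s
# inside the VM
iperf3 -c <freenas-ip>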
 
Code:
boot: cdn
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
net0: e1000=42:ED:D8:16:3B:2D,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: vm-200:vm-200-disk-1,size=102392M
scsi1: vm-test:vm-200-disk-1,size=49G
smbios1: uuid=ca24be29-edf3-40c3-adc0-9df17f1d5450
sockets: 1
Hi,
the problem is the block size and the VM memory!

You are using a 1 GB block size, but the VM has only 1 GB of RAM, so dd has to buffer an entire block in guest memory before it can write it out, which pushes the guest into swapping.

I tried the same with a 1.5 GB VM:
Code:
root@sicherung:~ # dd if=/dev/zero of=/var/data/archiv/bigfile bs=1G count=2 conv=fdatasync
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 216.564 s, 9.9 MB/s


root@sicherung:~ # dd if=/dev/zero of=/var/data/archiv/bigfile bs=1M count=2048 conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 20.546 s, 105 MB/s
You can see the difference.
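(If you want numbers that are closer to real-world behaviour, something like fio is usually a better fit than dd. A minimal sketch; the device path and sizes are only illustrative, and writing to /dev/vdb this way is destructive:)
Code:
fio --name=seqwrite --filename=/dev/vdb --rw=write --bs=1M --size=2G --direct=1 --ioengine=libaio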

Udo