Hello Proxmox forum
I am new to virtualization. I have tried to google and RTFM, but have some difficulties in finding the answers I am looking for. I hope someone can clarify a few points for me.
I am trying to access a device directly from a KVM guest. The device is /dev/sdc as seen from the host. What I want is for this device to belong to the guest "testkvm2" with proper I/O performance, comparable to accessing the device directly from the host. It should not be accessed from the host or any other guest, and it is OK that we cannot migrate to another host.
First I found out I could do it this way:
qm set 106 --virtio1 /dev/sdc
Or I could add "virtio1: /dev/sdc" to /etc/qemu-server/106.conf.
However, my I/O performance was not impressive, especially with small block sizes.
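To give an idea of what I mean, I have been testing with dd along these lines (just an example, the exact numbers are not important; inside the guest the device shows up as /dev/vdb, and the write test of course overwrites the disk):
dd if=/dev/vdb of=/dev/null bs=4k count=100000 iflag=direct
dd if=/dev/zero of=/dev/vdb bs=4k count=100000 oflag=direct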
Then I found this document:
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaav/LPC/LPCKVMSSPV2.1.pdf
which has nice plots and basically tells me to use virtio, cache=none, aio=native, and the deadline scheduler on both host and guest.
cache=none could be set with the qm command, and I know how to change the scheduler.
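For reference, this is roughly what I mean (I am not 100% sure of the exact qm syntax for the cache option, and I am assuming the virtio disk shows up as /dev/vdb inside the guest):
qm set 106 --virtio1 /dev/sdc,cache=none
echo deadline > /sys/block/sdc/queue/scheduler    (on the host)
echo deadline > /sys/block/vdb/queue/scheduler    (inside the guest)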
However, to test with aio=native I had to run kvm manually from the command line:
/usr/bin/kvm -monitor unix:/var/run/qemu-server/106.mon,server,nowait -vnc unix:/var/run/qemu-server/106.vnc,password -pidfile /var/run/qemu-server/106.pid -daemonize -usbdevice tablet -name testkvm2 -smp sockets=3,cores=1 -nodefaults -boot menu=on -vga cirrus -tdf -k da -drive file=/var/lib/vz/template/iso/debian-6.0.1a-amd64-CD-1.iso,if=ide,index=2,media=cdrom -drive file=/var/lib/vz/images/106/vm-106-disk-1.qcow2,if=virtio,index=0,boot=on -drive file=/dev/sdc,if=virtio,index=1,cache=none,aio=native -m 512 -netdev type=tap,id=vlan0d0,ifname=vmtab106i0d0,script=/var/lib/qemu-server/bridge-vlan -device virtio-net-pci,mac=4E:5C0:55:7E:E4,netdev=vlan0d0
I found most of the command parameters by running "ps ax | grep kvm" against a KVM guest I had started from the Proxmox GUI; I just added the aio=native part.
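In other words, the only thing I changed relative to the GUI-generated command is the aio=native on this drive definition:
-drive file=/dev/sdc,if=virtio,index=1,cache=none,aio=native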
qm does not recognize the aio option, whether I add it to the /etc/qemu-server/106.conf file or pass it on the command line.
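What I tried was along these lines (the exact syntax may not be right, which could of course be part of the problem):
qm set 106 --virtio1 /dev/sdc,cache=none,aio=native
virtio1: /dev/sdc,cache=none,aio=native    (in /etc/qemu-server/106.conf)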
So my first question is: if I decide to run with aio=native, how can I enable it and still use the GUI to start and stop the guest?
I am running qemu-server 1.1-25 on Debian Lenny, and I noticed that there is a newer version in the repository now; will that support aio=native in qm? I don't want to upgrade just for the sake of upgrading, since I am already running another KVM guest in production. Is there a changelog somewhere?
My second question is whether I should use LVM on my raw partition. That way, I can add it from within the GUI, and of course it gives me some flexibility later if I want to expand the volume by adding another disk. However, will it make any difference in performance? I still cannot see that I can set options like cache=none,aio=native in the GUI when I add an LVM group.
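Just so it is clear what I mean by that (the volume group name is only an example):
pvcreate /dev/sdc
vgcreate vg_testkvm2 /dev/sdc
and then add vg_testkvm2 as an LVM group under storage in the GUI.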
My third question is whether there is any way I can protect /dev/sdc from the host OS and let it know that the device is already in use by a guest.
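So far the only thing I can think of is double-checking that nothing on the host mounts or touches it, e.g.:
grep sdc /etc/fstab /proc/mounts
but that does not really mark the disk as belonging to the guest, which is what I am after.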
A suggestion of a whole different approach is welcome, too.
Thanks in advance for enlightening me on any of these points.