Q: Current state of KVM & SMP Virtualized hosts

fortechitsolutions

Renowned Member
Jun 4, 2008
Hi, just a quick question: I gather from reading threads in the past few weeks that there are some stability issues if you manually set a KVM virtual machine to have multiple CPUs (i.e., 2 or more). I believe those comments pre-dated the new Proxmox release (which updated KVM), so I wanted to follow up and confirm specifically:

- does the new KVM in the latest Proxmox VE change the behaviour of virtual hosts with SMP (i.e., better / improved / 'production-ready' stability)?

... or are there still known gotchas / issues that make this a less desirable arrangement at present?

... and, dare I ask, is this simply the current state of the upstream / latest KVM release, or is it something waiting on a new version of KVM for Proxmox VE to address?

I ask in part because I am planning a virtualized server deployment / migration in the near future (with both OpenVZ virtual hosts and some Win2003 KVM virtual hosts on the same hardware). However, some of the Win2003 instances would likely benefit significantly from SMP / multi-CPU resources, so I'm rather keen to clarify this detail.

I do realize, of course, that Proxmox is still officially in beta and not formally recommended for production use. Nor am I implying that any missing feature is a failing! :-) I just wish to clarify / understand the current state of affairs in the latest Proxmox VE.

Many thanks,


---Tim Chipman
 

We tested Win2003, XP, Win2008 and Vista with 2 CPUs; none of them worked stably. Using SCSI as the boot device is also not stable. So on the Windows side you can only use 1 CPU and IDE disks (with VIRTIO network), and this is quite stable - and fast. I have no benchmarks to publish here, but KVM is very fast.

Using KVM for Linux, e.g. Ubuntu 8.04 (i386), worked without any problems. We tested 4 CPUs, SCSI disks, VIRTIO disks, and VIRTIO network.

If you can live with the current restrictions on KVM Windows guests, you can consider production use.

As soon as we see major improvements on the KVM side we will release them asap. As KVM development is extremely fast, I assume these issues could be solved soon.

For all other features, see our roadmap.
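To make the stable combination concrete, here is a rough sketch of the kind of raw KVM invocations matching the two configurations above. Memory sizes, image paths, and tap interface names are illustrative assumptions, not taken from an actual Proxmox VE setup (Proxmox VE generates these command lines for you):

```shell
# Hedged sketch only - paths and interface names are made up.

# Windows 2003 guest: single CPU, IDE boot disk, VIRTIO NIC
kvm -m 1024 -smp 1 \
    -drive file=/var/lib/vz/images/101/win2003.raw,if=ide,boot=on \
    -net nic,model=virtio -net tap,ifname=tap101i0 \
    -vnc :1

# Linux guest (e.g. Ubuntu 8.04): 4 CPUs, VIRTIO disk and NIC
kvm -m 1024 -smp 4 \
    -drive file=/var/lib/vz/images/102/ubuntu.raw,if=virtio,boot=on \
    -net nic,model=virtio -net tap,ifname=tap102i0 \
    -vnc :2
```

The point is only which knobs differ between the two cases: -smp and the -drive if= interface type.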
 
Many thanks for the follow-up / clarification. Good to know the SMP-virtual-KVM issue is with Windows guests (not Linux). Alas, for the project I have right now this is a requirement, so I'll likely have to deploy another solution for this one. Ah well.

My impression of KVM Windows guest performance is consistent with yours, i.e., very good. In case it is of interest to others: I ran benchmarks on a number of virtualization solutions last month (June 2008) - all running a stock install of Win2003 as the guest - and Proxmox / KVM fared very well.

In case others want to see the results, I posted them to a website of mine; they are visible at this URL:

http://sandbox.fortechitsolutions.ca/pmwiki.php?n=Testing.Jun-2008-virtualization-basic-benchmarking


---Tim
 
VERY surprised with performance of VZHOST/Virtuozzo

Maybe you should test when the system is under load, for example when 20 VMs are running.

Doing benchmarks with a single VM will not show the advantages of OpenVZ.

- Dietmar
 
We tested Win2003, XP, Win2008 and Vista with 2 CPUs; none of them worked stably. Using SCSI as the boot device is also not stable. So on the Windows side you can only use 1 CPU and IDE disks (with VIRTIO network)

Did anything change here? This post is six months old, and I've read on the KVM mailing list that using SCSI for Windows should provide much better disk performance.
On the other hand, you write that using SCSI for Windows is not stable. Is this still valid with the latest release of Proxmox VE 1.1?
 
Did anything change here? This post is six months old, and I've read on the KVM mailing list that using SCSI for Windows should provide much better disk performance.
On the other hand, you write that using SCSI for Windows is not stable. Is this still valid with the latest release of Proxmox VE 1.1?

As far as I know, SCSI for KVM is still unstable, but I have not personally tested this extensively for all Windows OS types. Can you point me to this KVM discussion?

I am waiting for virtio block drivers for Windows; that would be cool. But I also have to say that I do not really see bad performance on IDE anyway.

For Proxmox VE 2.0 we will include the possibility to use devices directly instead of disk image files, which removes one virtualization layer - this can speed up IO. (You can also do this now, but you need to configure it on the command line.)
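As a rough illustration of the command-line approach mentioned here: KVM's -drive option accepts a block device path just as it accepts an image file, so the guest writes go straight to the device with no image-format layer in between. The LVM volume path below is a made-up example, not from a real setup:

```shell
# Hypothetical sketch: attach an LVM logical volume directly as the
# guest's IDE boot disk instead of a .raw/.qcow2 image file.
kvm -m 1024 -smp 1 \
    -drive file=/dev/vg0/vm-101-disk-1,if=ide,boot=on \
    -net nic,model=virtio -net tap
```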
 
As far as I know, SCSI for KVM is still unstable, but I have not personally tested this extensively for all Windows OS types. Can you point me to this KVM discussion?

Here are some links:

http://article.gmane.org/gmane.comp.emulators.kvm.devel/22767/match=scsi

(scroll down to see how SCSI write performance compares, not Windows-specific though)


An older thread comparing IDE with SCSI - the results for SCSI and IDE are similar:

http://thread.gmane.org/gmane.comp.emulators.kvm.devel/3373/focus=3376


SCSI problems on Windows seem to happen in Debian only - what does Proxmox use?

http://thread.gmane.org/gmane.comp.emulators.kvm.devel/27378


I can't find any performance tests for Windows using SCSI. Here is a hint from Avi Kivity that "SCSI emulation should provide decent performance":

http://thread.gmane.org/gmane.comp.emulators.kvm.devel/14606/focus=15165


And some confirmation that SCSI on Windows is unstable:

http://thread.gmane.org/gmane.comp.emulators.kvm.devel/27155/focus=27161


I am waiting for virtio block drivers for Windows; that would be cool. But I also have to say that I do not really see bad performance on IDE anyway.

For Proxmox VE 2.0 we will include the possibility to use devices directly instead of disk image files, which removes one virtualization layer - this can speed up IO. (You can also do this now, but you need to configure it on the command line.)

I don't use Proxmox yet, but I wanted to evaluate it and possibly replace my VMware servers with it, as they give me all sorts of headaches...


It would be really nice to be able to use devices directly (without having to go to the command line).

When implementing that feature, note that iSCSI devices are best accessed via /dev/disk/by-path/ paths (yes, they are looong paths), as they are static across reboots (whereas /dev/sdX names are "dynamic" and can change). So the interface should make them easy to use.
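To illustrate the point: the static names are udev-maintained symlinks, which you can inspect directly. The iSCSI entry below is a hypothetical example of what such a name looks like, not taken from a real system:

```shell
# Persistent device names: udev maintains symlinks under /dev/disk/
# that survive reboots, unlike the enumeration-order /dev/sdX names.
ls -l /dev/disk/by-path/
# An iSCSI LUN typically shows up with a long but stable name such as
# (hypothetical example):
#   ip-192.168.1.50:3260-iscsi-iqn.2009-01.com.example:storage-lun-0 -> ../../sdb
```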

BTW, are there any timelines for Proxmox VE 2.0? A month, two months, six months, more? Not sure? ;)
 
It would be really nice to be able to use devices directly (without having to go to the command line).

When implementing that feature, note that iSCSI devices are best accessed via /dev/disk/by-path/ paths (yes, they are looong paths), as they are static across reboots (whereas /dev/sdX names are "dynamic" and can change). So the interface should make them easy to use.

BTW, are there any timelines for Proxmox VE 2.0? A month, two months, six months, more? Not sure? ;)

SCSI on Windows still needs some improvements, so I do not recommend it yet. But I have personally done migrations from VMware Server to KVM using IDE (mostly Win2003 with Exchange 2003) without problems. Just follow this wiki guide.

Schedule for 2.0: the plan is to release something around mid-2009; some parts could come earlier. Currently things are changing very fast (in a very positive way), so this is subject to change.
 
