Proxmox testing / KVM Production ready .. ? Misc q's..

fortechitsolutions

Renowned Member
Jun 4, 2008
Hi,

I have been playing with Proxmox for the past week on a pair of test boxes (Athlon 2 GHz) in a 'cluster' config, and I'm very happily impressed with things so far. I wanted to ask a couple of questions about things that are not entirely clear; a few gentle kicks in the right direction would be greatly appreciated.

- Win200X guests seem to run very smoothly in the KVM virtual environment, and I am very happy with how well the integrated VNC console works from the web interface. Installing the paravirtual NIC drivers was straightforward and worked well.

I'm curious, are there any other 'special drivers' one should consider adding (for disk in particular?) to enhance performance of a KVM-hosted Windows virtual machine? I know many other virtualization platforms encourage installing both disk and NIC drivers to improve performance inside the virtual machine, hence this question.

(Possibly KVM implements disk access in a different manner, which makes this less of an issue?)

- I am curious, in general: is KVM a "production ready" way to deploy Win200X virtual servers? Would I be ill-advised to start deploying real production Windows systems in a KVM / Proxmox environment? Has anyone been doing this with success?

- I wasn't clear whether the underlying Proxmox environment (Debian Linux) has standard Linux software RAID (md) support, RAID1 in particular, in the kernel. That is, could I install Proxmox on hardware with two local disks and set up software RAID, either at install time or via post-install migration to software RAID? (Typically, if the kernel on the 'appliance' lacks raid1/md support, this is a non-starter.) Alternatively, if hardware RAID is the only way to get disk RAID on the Proxmox host, is there a list somewhere outlining which hardware RAID controllers are supported?

I guess, for that matter: can the underlying Proxmox Debian host be managed 'by hand / via console' to customize such things, without too much risk of breaking Proxmox entirely? I.e., build a new kernel with software RAID / hardware RAID controller support as required?


- Regarding migrating KVM-based virtual hosts between cluster nodes in Proxmox: I assume this is done in the same general manner as OpenVZ, i.e., over the wire (whatever Ethernet connectivity exists between the nodes), so throughput will depend on the wire speed (100 Mbit vs. gigabit Ethernet, for example) and migration time will vary with the actual disk image size (the actual qcow2 file size)? Clearly a biggish image of several gigabytes will take a while to push across.

In general, dare I ask, are there any general comments about Win200X guest performance running inside KVM virtualization, as compared to running within Xen-style hypervisors? I.e., on the same hardware, is there any significant difference in performance to be expected between a KVM-based and a Xen-based virtualized guest OS (Windows in particular)? (Clearly I prefer to use OpenVZ for Linux guests, since the performance is so amazing, disk use is lighter, etc.)

In terms of OpenVZ host management: is the intent to do "more complex" work on a staging OpenVZ server (a more "classic" Linux environment) where containers are prepared and then brought over to Proxmox for production? Or do people simply do their less-trivial OpenVZ management via an SSH console on the Proxmox host? (For example, tuning OpenVZ resource allocation parameters, installing atypical software via "vzyum"-type management from the physical host that manages the containers, etc.)


Sorry for the rather rambling post and wide-ranging questions. Any and all help is certainly greatly appreciated.


--Tim Chipman
 
addendum

I have just re-read the forums more closely and found some discussion of a few of the points I raised; I gather, approximately:

- Paravirtual disk access is an upstream KVM feature not yet implemented in Proxmox. (I am a bit unclear what impact this has on actual performance, whether this is planned for Proxmox, since it doesn't appear on the roadmap, or whether anyone has further comments on this topic.)

- Production ready: "at your discretion", but it is suggested to be solid enough for production(?)

Certainly, any further info/clarification on my initial posting would still be greatly appreciated. Many thanks! --Tim
 
I'm curious, are there any other 'special drivers' one should consider adding (for disk in particular?) to enhance performance of a KVM-hosted Windows virtual machine? I know many other virtualization platforms encourage installing both disk and NIC drivers to improve performance inside the virtual machine, hence this question.

(Possibly KVM implements disk access in a different manner, which makes this less of an issue?)

We will also support paravirtual hard disks in 1.0.
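
For what it's worth, here is a minimal sketch of what a paravirtual (virtio) disk and NIC look like on a plain qemu-kvm command line; the image path, memory size and MAC address are placeholders, and the exact syntax Proxmox will use for this may differ:

    # sketch only: virtio disk + NIC on a bare qemu-kvm invocation
    # (the Windows guest still needs the virtio storage/network drivers installed)
    qemu-kvm -m 1024 -smp 1 \
        -drive file=/var/lib/vz/images/101/vm-101-disk.qcow2,if=virtio \
        -net nic,model=virtio,macaddr=DE:AD:BE:EF:00:01 -net tap \
        -vnc :1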

- I am curious, in general: is KVM a "production ready" way to deploy Win200X virtual servers? Would I be ill-advised to start deploying real production Windows systems in a KVM / Proxmox environment? Has anyone been doing this with success?

KVM is already quite stable. KVM uses large parts of the standard Linux kernel (compared to Xen or other technologies). For that reason, we expect KVM to be the most stable technology of all.

- I wasn't clear whether the underlying Proxmox environment (Debian Linux) has standard Linux software RAID (md) support, RAID1 in particular, in the kernel. That is, could I install Proxmox on hardware with two local disks and set up software RAID, either at install time or via post-install migration to software RAID? (Typically, if the kernel on the 'appliance' lacks raid1/md support, this is a non-starter.) Alternatively, if hardware RAID is the only way to get disk RAID on the Proxmox host, is there a list somewhere outlining which hardware RAID controllers are supported?

I guess, for that matter: can the underlying Proxmox Debian host be managed 'by hand / via console' to customize such things, without too much risk of breaking Proxmox entirely? I.e., build a new kernel with software RAID / hardware RAID controller support as required?

Currently, software RAID is compiled as loadable modules, so you will have a problem if you want to boot from such a disk (the modules have to be loaded first).

But for various reasons we don't want to support software RAID right now (read the forum postings about that).
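
For reference only (and explicitly unsupported on the PVE host, per the above), a minimal sketch of loading the md modules and mirroring two spare, non-boot disks; the device names are placeholders:

    # unsupported on the PVE host; device names below are placeholders
    modprobe raid1
    cat /proc/mdstat
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1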

There is no list of supported hardware RAID controllers, but it should be easy to find out whether a controller is supported by a standard Linux 2.6.24 kernel. Or just ask on this list whether somebody has experience with a specific model.
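
For example, a quick way to check a given card against the stock kernel (the 3w-9xxx driver below is only an example, for a 3ware 9000-series controller):

    # identify the RAID controller, then check whether the kernel ships a driver for it
    lspci | grep -i raid
    modinfo 3w-9xxx | head -n 3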

- Regarding migrating KVM-based virtual hosts between cluster nodes in Proxmox: I assume this is done in the same general manner as OpenVZ, i.e., over the wire (whatever Ethernet connectivity exists between the nodes), so throughput will depend on the wire speed (100 Mbit vs. gigabit Ethernet, for example) and migration time will vary with the actual disk image size (the actual qcow2 file size)? Clearly a biggish image of several gigabytes will take a while to push across.

Yes, large disks will take some time.
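
As a back-of-envelope estimate, assuming roughly 110 MB/s effective on gigabit and 11 MB/s on 100 Mbit (both figures are assumptions, and protocol overhead is ignored), a hypothetical 20 GB image works out to:

    echo $(( 20 * 1024 / 110 ))   # gigabit:  ~186 seconds (~3 minutes)
    echo $(( 20 * 1024 / 11 ))    # 100 Mbit: ~1861 seconds (~31 minutes)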

In general, dare I ask, are there any general comments about Win200X guest performance running inside KVM virtualization, as compared to running within Xen-style hypervisors? I.e., on the same hardware, is there any significant difference in performance to be expected between a KVM-based and a Xen-based virtualized guest OS (Windows in particular)? (Clearly I prefer to use OpenVZ for Linux guests, since the performance is so amazing, disk use is lighter, etc.)

There are quite a few 'misleading' performance benchmarks out there. I suggest you measure the performance yourself using 'your' application.
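
For instance, a crude sequential-write check from inside a Linux guest (the size and target path are arbitrary; a Windows guest would need its own tooling, and your real workload remains the better benchmark):

    # crude sequential write test; block size and count are arbitrary
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
    rm /tmp/ddtest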

In terms of OpenVZ host management: is the intent to do "more complex" work on a staging OpenVZ server (a more "classic" Linux environment) where containers are prepared and then brought over to Proxmox for production? Or do people simply do their less-trivial OpenVZ management via an SSH console on the Proxmox host? (For example, tuning OpenVZ resource allocation parameters, installing atypical software via "vzyum"-type management from the physical host that manages the containers, etc.)

From a security perspective, we do not support multiple users on the PVE host (local users can gather security-related information). So I guess using a 'staging' server is a good idea.
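
For illustration, the kind of per-container tuning that is typically done from the host's (or staging host's) console; the container ID and values below are made up:

    # container ID and limits are examples only
    vzctl set 101 --privvmpages 262144:262144 --save   # adjust a resource limit
    vzctl set 101 --diskspace 10G:11G --save           # resize the disk quota
    vzyum 101 install httpd                            # install a package into CT 101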

- Dietmar
 
Many thanks for the comprehensive reply - much appreciated. All very good to know, all very clear.

Certainly, WRT performance / benchmarks, I appreciate the importance of pre-deployment testing in a given configuration to establish 'reasonable and meaningful baselines'. I guess I was curious whether there are 'broad and general' guidelines, such as:

Generally slower -> generally faster:

"More SW Emulation" based virtualization
MS VirtServer 2003
Free Edition VMWare Server
MS HyperV 2008 hypervisor
Non-Free VMWare server
VirtualBox
Xen derived Hypervisor (Xen free/not-free/Virtual Iron/etc)
Container-based virtualizers (openVZ / Virtuozzo / Solaris containers)
"Native Performance" (no virtualization)

... and whether there is any general feeling for where KVM-based approaches might fit into this (hypothetical) scale above.

Many thanks for your help clarifying these topics - certainly all the work on this project is greatly appreciated; it makes KVM much more accessible, I believe, and the integration with OpenVZ is great.


Tim Chipman
 
There is no list of supported hardware RAID controllers, but it should be easy to find out whether a controller is supported by a standard Linux 2.6.24 kernel. Or just ask on this list whether somebody has experience with a specific model.

Although there's nothing inherently wrong with the 3ware controllers themselves (they perform better under BSD and Windows), you should probably get some kind of Areca controller instead of a 3ware for a server that is intended to run Linux.

I'm not saying you shouldn't look at the more expensive options like Adaptec and LSI, but the above is a specific recommendation for the cheaper end of the hardware RAID controller scale, which is where I solidly stay and have always been perfectly content to do so.
 
