Hi,
I have been playing with Proxmox over the past week on a pair of test boxes (Athlon 2 GHz) in a 'cluster' config, and I'm very pleasantly impressed with things so far. I wanted to ask a few questions about things that are not entirely clear to me; a few gentle kicks in the right direction would be greatly appreciated.
- Win200X guests seem to run very smoothly in the KVM virtual environment, and I am very happy with how well the VNC console is integrated into the web interface. Installing the paravirtual NIC drivers was straightforward and worked well.
I'm curious: are there any other 'special drivers' one should consider adding (for disk in particular?) to enhance the performance of a KVM-hosted Windows virtual machine? Many other virtualization platforms encourage installing both disk and NIC drivers to improve performance inside the guest, hence this question.
(Possibly KVM implements disk access in a different manner, which makes this less of an issue?)
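To be concrete about what I'm imagining: since the paravirtual NIC appears to be the virtio model, I would guess a paravirtual disk would be attached in much the same way, something like the qemu/kvm options below (provided a Windows virtio block driver exists; this is only my sketch, and Proxmox builds its own command lines, so the file names and options here are purely illustrative):

  # NIC is paravirtual today, disk is plain IDE emulation
  kvm -hda win2003.qcow2 -net nic,model=virtio -net tap

  # what I imagine a paravirtual disk attachment might look like
  kvm -drive file=win2003.qcow2,if=virtio -net nic,model=virtio -net tap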
- In general, is KVM a "production-ready" way to deploy Win200X virtual servers? Would I be ill advised to start deploying real production Windows systems in a KVM / Proxmox environment? Has anyone been doing this with success?
- I wasn't clear on this: does the underlying Proxmox environment (Debian Linux) have standard Linux software RAID (md) support in the kernel (RAID1 in particular)? I.e., could I install Proxmox on hardware with two local disks and set up software RAID, either at install time or via post-install migration to software RAID? (Typically, if the kernel on the 'appliance' lacks raid1/md support, this is a non-starter.) Alternatively, if hardware RAID is the only way to get disk RAID on the Proxmox host, is there a list somewhere outlining which hardware RAID controllers are supported?
For that matter: can the underlying Proxmox Debian host be managed 'by hand / via console' to customize such things without too much risk of breaking Proxmox entirely, e.g., building a new kernel with software RAID / hardware RAID controller support as required? (A rough sketch of the sort of thing I have in mind follows below.)
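For concreteness, the post-install migration to software RAID1 I have in mind is the generic mdadm procedure below; this is just from memory, not anything Proxmox-specific, the device names are only placeholders, and it assumes the running kernel has md/raid1 support and mdadm is installed:

  # build a degraded RAID1 mirror on the second disk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

  # copy the installation onto /dev/md0, point the bootloader and fstab
  # at it, then add the original disk into the array
  mdadm --add /dev/md0 /dev/sda1

  # watch the rebuild progress
  cat /proc/mdstat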
- Regarding migrating KVM-based virtual hosts between cluster nodes in Proxmox: I assume this is done in the same general manner as with OpenVZ, i.e., over 'the wire' (whatever Ethernet connectivity exists between the nodes), so throughput will depend on the wire speed (100 Mbit vs. gigabit Ethernet, for example) and migration time will vary with the actual disk image size (the actual qcow2 file size)? Clearly a biggish image of several GB will take a while to push across; for example, a 20 GB image works out to roughly half an hour at 100 Mbit/s versus a few minutes over gigabit.
In general, dare I ask: are there any comments about Win200X guest performance inside KVM virtualization compared to running under Xen-style hypervisors? I.e., on the same hardware, is there any significant difference in performance to be expected between a KVM-based and a Xen-based virtualized guest OS (Windows in particular)? (Clearly I prefer OpenVZ for Linux guests, since the performance is so good, disk use is lighter, etc.)
In terms of OpenVZ host management: is the intention that 'more complex' work is done on a staging OpenVZ server (a more 'classic' Linux environment) where VZ containers are prepared and then brought over to Proxmox for production? Or do people simply do their less-trivial OpenVZ management over an SSH console on the Proxmox host? (For example, tuning OpenVZ resource-allocation parameters, installing atypical software via 'vzyum'-style management from the physical host that manages the containers, etc., along the lines of the examples below?)
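To illustrate the kind of console-level work I mean, these are the sort of stock vzctl / vzyum commands I would expect to run by hand on whichever host manages the containers (container ID 101 and the values are just examples, and I am assuming the standard OpenVZ tools behave the same way on Proxmox):

  # tune resource allocation for container 101 and persist it to its config
  vzctl set 101 --diskspace 10G:11G --save
  vzctl set 101 --cpuunits 1500 --save

  # install additional software into the container from the host via vzyum
  vzyum 101 install postgresql-server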
Sorry for the rather rambling post and wide-ranging questions. Any and all help is greatly appreciated.
--Tim Chipman