Proxmox (KVM) vs Hyper-V as failover cluster hosts

achekalin
Don't want to start another holy war, but I really need arguments for implementing Proxmox rather than Hyper-V on some of my company's servers. My colleagues want to give Hyper-V a chance; I don't.

Our aim is to shorten VM downtime in case of hardware failure. That is, if in a cluster of two virtualization hosts (VH1 and VH2) VH1 goes offline and only VH2 stays alive, I need the VMs moved from VH1 to VH2 automatically, with no admin intervention.

It looks like Hyper-V can do that and Proxmox can't (it can do live migration, but HA will only be available in Proxmox version 2.0+, so "maybe someday in the future"). Has anyone tested this and can share their experience?

P.S. Personally I like Proxmox much more. So I need either to know how to implement host failover with Proxmox, or to know for sure that it is impossible and that we have to use Hyper-V.
 
Just a few arguments.

  • Proxmox VE is open source and built on standard technologies; Hyper-V is not. Vendor lock-in on the base infrastructure is unwanted in most situations.
  • To get all failover features in Hyper-V you need high-priced server licenses.
  • Hyper-V has very limited support for Linux guests (but yes, good MS support).
  • Hyper-V has no containers like OpenVZ.
Btw, the first Proxmox VE 2.0 beta is expected in a few weeks, which will be the basis for HA setups.
 
If I knew how to implement it, I'd build HA myself. I also won't put a beta into production, so I'd rather learn how HA will be done in 2.x and maybe try to port it to my 1.8.

Hyper-V surely means vendor lock-in for us, but as long as it is free and documentation is available we can live with it (I mean this is not the most serious argument).
The bad news is the limited support for Linux guests (they claim to support RHEL-based distros, and we use some CentOS servers, so I expected it to work well).
We don't use OpenVZ because we plan to use live migration and prefer a hypervisor with its 'hard limits' over OpenVZ with its 'best effort' approach.

OK, is there any way to take a look at the Proxmox 2.0 branch now? Or maybe simply to read about how HA will be done there?
 
We use DRBD with Proxmox 1.8 and can do live migrations with ease.

Proxmox 1.8 does not have a feature to automatically start a VM on another node.
We set up a process to copy the qemu config files to a separate folder on the other node, so we are prepared for a node failure.
Provided the config files are available, a few command-line commands are all that is needed to bring the VM up on the remaining node.
I see no reason that could not be automated if one desired; we just get an alert and deal with the issue manually.
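
If anyone is wondering what those few commands look like, here is a minimal sketch of the idea, assuming the Proxmox 1.x config location /etc/qemu-server, a DRBD resource named r0 and VMID 101 (all placeholders for your own setup):

Code:
# On each node, periodically copy the KVM configs to the peer (e.g. from cron):
rsync -a /etc/qemu-server/ root@other-node:/root/qemu-server-copy/

# After a node dies, on the surviving node:
drbdadm primary r0                                      # promote DRBD if it is still secondary
cp /root/qemu-server-copy/101.conf /etc/qemu-server/    # make the VM known to this node
qm start 101                                            # start it from the replicated DRBD disk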

I am eagerly waiting for Proxmox 2.0
Until then DRBD + Proxmox 1.8 is great.
 
I've used DRBD and live migration before I realized I cannot easily restore VMs that were running on a failed node. Even if we lose the slave node, not the master one, I just can't go to the web GUI and click 'migrate'; even though the node is dead, all the VMs on it are still 'on it', and I can't start them on another node in a way that Proxmox 'understands'. What I can do is periodically copy the qemu config files and, in case of trouble, simply start the VMs by hand on any node that is alive, using the image from DRBD and the config from the backup. Too messy, and (in fact) when the dead node comes up again (say, it was simply disconnected from the network due to a bad Ethernet cable, and now we reconnect it with another cable), it will try to run its VMs again, and all our 'run-by-hand-on-other-nodes' VMs become a point of conflict. Messy, as I've already said.

I also found no way to use the standard backup feature with a DRBD volume if I use it with no filesystem on it. If I do mkfs on it, then I can't mount it on several nodes at once (the filesystem gets corrupted).

What's your setup? How does DRBD help you in that case?
 
We have an APC PDU and are able to remotely turn individual servers on and off.
This solves the issue of the dead node coming back up and causing conflicts: if it is powered off, it cannot come back up.
STONITH (Shoot The Other Node In The Head) has been around for a long time to deal with this in an automated manner.
When bringing up the dead node after repair, it would be disconnected completely from the other node until we had made the necessary changes to ensure there are no conflicts.
If we ever face the situation where the master node failed and only the slave remained, I would break the cluster config so I could still use the Proxmox interface on the slave.

Is this an ideal setup? Not really, but it is very reliable and not difficult to deal with.
2.0 will certainly make things better but we are still stuck with 1.8 for now.


I would not recommend backing up data on DRBD to DRBD; if something ever happens to your DRBD volume, you would also lose your backups.
Each of our Proxmox nodes has one hot-swap bay connected to a motherboard SATA port in AHCI mode.
We use cryptsetup to encrypt the disk in that bay, combined with a vzdump hook script that mounts/unmounts the encrypted volume when vzdump runs.
Each week we swap out the backup disks and take them offsite.
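
For reference, a vzdump hook script along those lines could look roughly like the sketch below. It is only an illustration: the device name, mapping name, mount point and key file are assumptions, and vzdump passes the current phase as the first argument when called with --script.

Code:
#!/bin/bash
# Hook script handed to vzdump via its --script option (sketch).
DEV=/dev/sdb1        # hot-swap backup disk (assumption)
NAME=backupdisk      # dm-crypt mapping name (assumption)
MNT=/mnt/backup      # directory vzdump writes to (assumption)

case "$1" in
  job-start)
    cryptsetup luksOpen "$DEV" "$NAME" --key-file /root/backup-disk.key
    mount "/dev/mapper/$NAME" "$MNT"
    ;;
  job-end|job-abort)
    umount "$MNT"
    cryptsetup luksClose "$NAME"
    ;;
esac
exit 0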

We even set up a machine in our office that is identical to the servers at the datacenter.
We set it up so that we can insert a backup disk and the latest backup file of each VM on that disk is automatically restored; no human intervention is needed other than inserting the disk.
After being restored we can start them up to ensure that the backups are working well.
This machine also serves as spare parts for our production machines and for testing processes before using them on production.
Verification that the backups work is an often-overlooked aspect of backup processes; we were able to make it brain-dead simple with a little bit of scripting.
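
The restore automation can be little more than a loop over the newest dump per VMID. A rough sketch, assuming the archives sit in /mnt/backup/dump and follow the usual vzdump-qemu-<vmid>-<timestamp> naming; qmrestore's exact options may differ between versions:

Code:
#!/bin/bash
# Restore the newest vzdump archive of every VM found on the inserted backup disk (sketch).
DUMPDIR=/mnt/backup/dump    # where the backup disk is mounted (assumption)

for vmid in $(ls "$DUMPDIR"/vzdump-qemu-* 2>/dev/null \
              | sed 's/.*vzdump-qemu-\([0-9]\+\)-.*/\1/' | sort -u); do
    latest=$(ls -t "$DUMPDIR"/vzdump-qemu-"$vmid"-* | head -n 1)
    qmrestore "$latest" "$vmid" --force    # overwrite the previous test restore of that VMID
done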

With Proxmox we were able to solve our problems and come up with simple methods to deal with various issues that might come up; try doing that with Hyper-V.
 
This is a Proxmox forum, so I think it's biased anyway ;-)
I've been using Hyper-V (for longer), due to a decision taken there when we virtualized, and Proxmox (more recently) at another workplace.
I hope I can share some experiences here - they may not apply to your environment...

Clustering (not much experience, but some facts)
  • Hyper-V with Live Migration also requires clustering, like Proxmox, but with a significant difference: it requires shared storage
    -> In other words: a SAN with FC(oE)/iSCSI or something like InfiniBand.
  • In general shared storage makes sense in bigger environments, but beware: you need to get everything set up, and have the personnel and the knowledge to configure and maintain it correctly (LUN mapping, multipathing etc.). If you go FC for high bandwidth/low latency, be prepared to shell out a lot of money on targets, HBAs and the respective switches.
  • You can use shared storage for clustering with Proxmox too - but it is not required initially, since you can use DRBD (see the sketch after this list).
    (Investing in a 10G NIC for DRBD replication could be a good investment.)
  • Setting up a Hyper-V cluster requires you to join the cluster to a Windows domain (host the DCs on that cluster and you bite yourself in the tail) - no need for Windows domain overhead on Proxmox.
  • The out-of-the-box cluster management is not very intuitive - SCVMM may additionally be required (more cost, more overhead).
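
To illustrate the DRBD point above: a two-node Proxmox setup only needs a small resource definition replicated over a dedicated link. A minimal sketch (hostnames, backing devices and IP addresses are placeholders):

Code:
# Create a minimal DRBD resource on both nodes (DRBD 8.3-style syntax):
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    protocol C;                    # synchronous replication
    on proxmox1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;       # local disk/partition backing the VM storage
        address   10.0.0.1:7788;   # dedicated replication NIC
        meta-disk internal;
    }
    on proxmox2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
EOF
drbdadm create-md r0 && drbdadm up r0   # run on both nodes, then do the initial sync
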
Guest OS support: Hyper-V

  • Hyper-V loves Windows for sure - it's only with Windows that you get full feature support (Dynamic Memory, for example, is not even possible on supported Linux distros).
  • If the OS is not supported with Hyper-V drivers, you are limited to 1 vCPU, emulated 100 Mbit NICs and 4 IDE devices, which are slow.
  • For Linux (if the 'supported' badge is required) you have to run SLES 10/11, RHEL 5/6 or CentOS 6 - basta.
  • All others are unsupported and will either need to run emulated (slow), or need to use the hv modules in the mainline kernel, which I'd not recommend for more serious daily usage before 2.6.39/3.0 (the recently released RHEL6 modules are based on them and work).
vCPUs per VM:
  • If you need many vCPUs per VM, Hyper-V 2008 and 2008 R2 limit you to 4, and only if you have the integration components for the OS; otherwise it's only 1.
  • KVM allows more, but MS plans to increase this limit with the future Windows release.
Guest OS support: Proxmox
Proxmox on the other hand likes Linux guests. You can run Windows without virtio (paravirtualized drivers), but it won't perform as badly as an unsupported Linux on Hyper-V. If you want decent Windows performance, get the signed but not WHQL-certified drivers from Fedora, or shell out some bucks for a RHEL Desktop subscription, where you can get WHQL-certified virtio-win drivers.

Guest OS conclusion:
If you have paravirtualized drivers in the VMs on both, I couldn't see a significant difference in terms of guest performance
(but I haven't done benchmarking). The guest OS range on KVM goes far beyond Linux and Windows; with Hyper-V you are still limited - no good support for BSDs or (OSS) Solaris derivatives.


Missing things and negative points - it's often easier to say what you don't like than what you do, so here we go:

Hyper-V
- The long-standing, much-limited non-Windows guest OS support and performance - it's still limited compared to KVM.
- Backup is either a do-it-yourself solution (if not clustered), or requires additional software (licenses), especially if you have a cluster.
- If you want additional features for cluster management, you need SCVMM (more cost...)
- Clustering requires at least the Enterprise Edition (which gives you 4 Windows guest licenses) - or pay for Datacenter per socket (unlimited Windows guests).
- Rebooting (really!): most security-related updates still require a reboot of the host - once a month - while Proxmox only requires this for a kernel update.
- Why do I need a GUI installed on the host OS, or a Windows machine, to intuitively manage a Hyper-V node? Point goes to Proxmox with its web UI.
- As said: no lightweight virtualization - requires more memory in total.

Proxmox:
- Currently, without Linux CLI knowledge, you need to reboot the whole node just to add a VM bridge - not necessary on Hyper-V (see the sketch after this list)
- Currently I cannot pull the network from a virtual NIC and plug it into another bridge that might be connected to another network without rebooting the VM - Hyper-V can do this
- Currently I cannot hot-unplug a disk from a VM (possible with Hyper-V, but limited to non-bootable SCSI drives, and Windows only)
- Why do quite a few default settings and scripts (backup) expect the management interface to be on eth0 (backup wants eth0 or vmbr0)? The cluster setup also doesn't ask whether you want it on the first interface.
- Integrated backup is quite limited - but hey: much easier to get it up and running here than on Hyper-V without additional software
(Looking forward to seeing LXC container support in Proxmox some day)
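
On the bridge point above (see the first item in the Proxmox list): with a little CLI the reboot can usually be avoided by defining the bridge in /etc/network/interfaces and bringing it up by hand. A sketch, with the bridge and NIC names as placeholders:

Code:
# Append a new bridge definition (vmbr1 on eth1 is an example):
cat >> /etc/network/interfaces <<'EOF'

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
EOF
ifup vmbr1    # bring the new bridge up without rebooting the node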

Finally: I'd say both are doing their job - both have quirks and limitations.
If you are more of an MS shop and money isn't a problem, your colleagues might win the argument for Hyper-V, but with Proxmox you have more freedom and more choice in terms of virtualization, and I believe you'll see a faster pace of new features - without paying for Enterprise licenses for extra features.

If you want to jump ship from Proxmox, there are a couple of other KVM-capable distros and bare-metal OSes out there; migrating away from Hyper-V needs more conversion work.

I don't know exact licensing prices on the MS side, since I am in Edu where they give us lots of stuff at a very low price. If you are a normal business, I'd expect that paying for support hours or a subscription to the Proxmox company could come out cheaper than Microsoft Hyper-V/SCVMM/DPM licensing plus their support.
 
