This is a Proxmox forum, so I think it's biased anyway ;-)
I've been using Hyper-V (longer), due to a decision taken when we virtualized there, and Proxmox (more recently) at another workplace.
I hope I can share some experiences here - they may not apply to your environment...
Clustering (not much experience, but some facts)
- Hyper-V with Live Migration also requires clustering like Proxmox but with a significant difference: It requires a shared storage
-> In other words: a SAN with FC(oE)/ iSCSI or something like Infiniband.
- In general shared storage makes sense in bigger environments, but beware: You need to get everything set up and have the personnel and knowledge to configure and maintain it correctly (LUN mapping, multipathing etc...). If you go FC for high bandwidth/low latency: Be prepared to shell out a lot of money on targets, HBAs and the respective switches.
- You can use shared storage for clustering with Proxmox too - but it is not required initially, since you can use DRBD.
(Investing in a 10G NIC for DRBD replication could be a good investment)
- Setting up a Hyper-V cluster requires you to join the cluster to a Windows Domain (put the DCs on the cluster itself and you bite yourself in the tail) - no need for Windows Domain overhead on Proxmox
- The out-of-the-box cluster management is not very intuitive - an additional SCVMM may be required (more cost, more overhead)
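To illustrate the DRBD option mentioned above: a minimal two-node resource definition could look roughly like this. This is only a sketch - the hostnames (pve1/pve2), IPs, and backing devices are made up for illustration, and real setups need matching config on both nodes plus proper fencing.

```
# /etc/drbd.d/r0.res -- hypothetical two-node resource for VM storage
# Node names, addresses and devices below are examples, not real hosts.
resource r0 {
    protocol C;               # synchronous replication
    device    /dev/drbd0;
    disk      /dev/sdb1;      # backing block device on each node
    meta-disk internal;
    on pve1 {
        address 10.0.0.1:7788;   # dedicated replication link (10G recommended)
    }
    on pve2 {
        address 10.0.0.2:7788;
    }
}
```

With protocol C a write is only acknowledged once it hit both nodes, which is what you want for VM images.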
Guest OS support: Hyper-V
- Hyper-V loves Windows for sure - it's only with Windows that you get full feature support (Dynamic Memory, for example, is not even possible on supported Linux distros).
- If the OS is not supported with Hyper-V drivers, you are limited to 1 vCPU, emulated 100Mbit NICs and 4 IDE devices which are slow.
- For Linux (if the 'supported' badge is required) you have to run SLES 10/11, RHEL 5/6 or CentOS 6 - and that's it.
- All others are unsupported and will either need to run emulated (slow), or need to use the hv modules in mainline, which I'd not recommend for serious daily usage before 2.6.39/3.0 (the recently released RHEL6 modules are based on them and work).
vCPUs per VM:
- If you need many vCPUs per VM, Hyper-V 2008 and 2008 R2 cap you at 4 - and only if you have the integration components for the OS, otherwise it's just 1.
- KVM does more, but MS plans to increase this limit with the next Windows release.
Guest OS support: Proxmox
Proxmox, on the other hand, likes Linux guests. You can run Windows without virtio (paravirtualized drivers) and it still won't perform as badly as an unsupported Linux on Hyper-V. If you want decent Windows performance, get the signed but not WHQL-certified drivers from Fedora, or shell out some bucks for a RHEL Desktop subscription, where you get virtio-win WHQL-certified.
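For illustration: once the virtio drivers are installed inside the Windows guest, switching the VM to paravirtualized devices is just a config change. A hypothetical excerpt (the VMID, storage name, MAC and even the config path are examples - the file location and syntax differ between Proxmox versions):

```
# Hypothetical excerpt from a Proxmox VM config (e.g. /etc/pve/qemu-server/101.conf)
# virtio0 puts the disk on the virtio-blk bus instead of slow emulated IDE:
virtio0: local:101/vm-101-disk-1.raw
# net0 with a virtio model instead of emulated e1000/rtl8139:
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
```

Install the drivers first while the old devices are still attached, then switch the bus type - otherwise Windows won't find its boot disk.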
Guest OS conclusion:
With paravirtualized drivers in the VMs on both sides, I couldn't see a significant difference in guest performance
(but I haven't done benchmarking). The guest OS range on KVM goes far beyond Linux and Windows; with Hyper-V you are still limited - no good support for BSDs or (OSS) Solaris derivatives.
Missing things and negative points - it's often easier to say what you don't like than what you do, so here we go:
Hyper-V
- The long-standing, much-limited non-Windows guest OS support (and performance) - and it's still limited compared to KVM.
- Backup is either a do-it-yourself solution (if not clustered) - or requires additional software (licenses), especially if you have a Cluster.
- If you want additional features for cluster management, you need SCVMM (more cost...)
- Clustering requires at least Enterprise Edition (will give you 4 Windows Guest licenses) - or pay Datacenter per Socket (unlimited Windows guests).
- Rebooting (really!): Most security-related updates still require a reboot of the host - once a month - while Proxmox only requires this on a kernel update.
- Why do I need a GUI installed on the host OS, or a Windows machine, to intuitively manage a Hyper-V node? Point goes to Proxmox with its web UI.
- As said: No lightweight virtualization - requires more memory in total.
Proxmox:
- Currently, without Linux CLI knowledge, you need to reboot the whole node just to add a VM bridge - not necessary on Hyper-V
- Currently I cannot unplug the network from a virtual NIC and plug it into another bridge (possibly connected to another network) without rebooting the VM - Hyper-V can do this
- Currently I cannot hot-unplug a disk from a VM (possible with Hyper-V, but limited to non-bootable SCSI drives, and Windows only)
- Why do quite a few default settings and scripts expect the management interface on eth0 (backup wants eth0 or vmbr0)? The cluster setup also doesn't ask whether you want it on the first interface.
- The integrated backup is quite limited - but hey: Much easier to get up and running here than on Hyper-V without additional stuff
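On the bridge point above: with some CLI knowledge the reboot can usually be avoided by defining the bridge by hand and bringing it up manually. A rough sketch (interface names are examples; assumes bridge-utils is installed):

```
# Append to /etc/network/interfaces (vmbr1/eth1 are example names):
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

# then bring it up without rebooting the node:
#   ifup vmbr1
```

The web UI just doesn't expose this yet, which is why it tells you to reboot.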
(Looking forward to see LXC container support in some day on Proxmox)
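For completeness on the integrated-backup point: under the hood it's essentially vzdump, and a scheduled backup can be as simple as one cron line. The VMID, path and option choice here are illustrative only:

```
# Example /etc/cron.d entry -- VMID, dump directory and options are examples:
# nightly snapshot-mode backup of VM 101, compressed, to local storage
30 2 * * * root vzdump 101 --snapshot --compress --dumpdir /backup
```

Limited, yes - but compare that one line to setting up DPM or a third-party agent on Hyper-V.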
Finally: I'd say both are doing their job - both have quirks and limitations.
If you are more of an MS shop and money isn't a problem, your colleagues might win the argument for Hyper-V, but with Proxmox you have more freedom and more choice in terms of virtualization, and I believe I see a faster pace of new features - without paying for Enterprise licenses to get them.
If you want to jump off from Proxmox, there are a couple of other KVM-capable distros and bare-metal OSes out there; migrating away from Hyper-V needs more conversion work.
I don't know the exact licensing prices on the MS side, since I am in Edu where they give us lots of stuff at a very low price. If you are a normal business, I'd expect that paying for support hours or a subscription to the Proxmox company could come out cheaper than Microsoft Hyper-V/SCVMM/DPM licensing + their support.