Hypervisor comparison

dad264

Jan 29, 2013
Hi everyone. I am putting together a matrix comparing the available hypervisors/IaaS platforms that compete with VMware. Could the community help me fill out this information for Proxmox? Some of it may not be applicable, as it is VMware terminology. I have very few answers so far. (Anything else worth adding would be appreciated.)
Thanks!


Max CPUs per host: 160
Max vCPUs per host:
Max vCPUs per guest:
Max RAM per host: 2TB
Max RAM per guest:
Memory Overcommit [yes/no]:
Page Sharing [yes/no]: Yes
vDisk Max Size:
Max vDisk/host:
Max vDisk/guest:
Max Active guests/host:
Guest NUMA [yes/no]:
Max Hosts/Cluster:
Max Guests/Cluster:


vNIC/host:
vNIC/guest:
VLAN Support [yes/no]: yes
vSwitch [yes/no]:
Trunk Mode to Guests [yes/no]: yes
SR-IOV [yes/no]:


Live Migration [yes/no]: Yes
Live Storage Migration [yes/no]:
Templating & Cloning [yes/no]: Yes
Dynamic disks resizing [yes/no]: Yes
Thin Disks (copy-on-write) [yes/no]:
Snapshots [yes/no]: Yes
Offload Data Transfer (ODX) [yes/no]:
Storage Multipathing [yes/no]:
Storage Types: iSCSI/Local/NFS/FC


Console [yes/no]: Yes
API [yes/no]: Yes, REST/JSON
Guest OS support: Linux, Windows (what specifically?)
GUI [yes/no]: Yes

Host Installation (*auto/stateless) [yes/no]:
Hot add vCPU/vRAM [yes/no]:
Identity Management (ldap?): LDAP/AD
 
Max vCPUs per guest: I have successfully tested 48 cores (I don't have the hardware to test more ;)
Max RAM per guest: I have tested with 128 GB
Memory Overcommit [yes/no]: yes
Max vDisk/guest: 4 IDE + 14 SCSI + 16 VirtIO
Max Active guests/host: I have production hosts with 80 guests
Guest NUMA [yes/no]: yes (not available through the GUI)
Max Hosts/Cluster: 16
Max Guests/Cluster: the GUI works with around 1,000 VMs max (beyond that it really slows down your browser)
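Since guest NUMA is not exposed in the GUI, one way to enable it is to pass a topology straight to KVM via the `args:` override in the VM's config file. This is only a sketch: the VMID and the node/CPU layout below are made-up examples, and the `-numa` syntax shown is the classic QEMU form.

```
# /etc/pve/qemu-server/100.conf  (hypothetical VMID)
# args: appends extra options to the qemu/kvm command line as-is.
# Here: two NUMA nodes, 4 vCPUs each, matching the 2x4 topology below.
args: -numa node,cpus=0-3 -numa node,cpus=4-7
sockets: 2
cores: 4
memory: 8192
```

Check the guest sees both nodes with `numactl --hardware` (Linux) after boot.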



vNIC/guest: 32
vSwitch [yes/no]: no
Live Storage Migration [yes/no]: Coming soon
Storage Multipathing [yes/no]: yes
Hot add vCPU/vRAM [yes/no]: no (planned for QEMU this year, maybe end of 2013)
 
The biggest point of comparison between Proxmox and VMware is cost per CPU for the features. VMware gives away a basic product for free, but then you have to pay for the advanced features, and if you need support you have to pay more. Proxmox is 100% free unless you need support - I've got a support contract and the Proxmox team have been very helpful!
 
Max Active guests/host: I have production hosts with 80 guests
There has been an experiment on whether you can run 1,000 OpenVZ containers on a single system, and it was a success. The system showed 97% idle CPU - try that with VMware.
Source: http://www.montanalinux.org/openvz-experiment.html#comment-29214
Hot add vCPU/vRAM [yes/no]: no (planned for QEMU this year, maybe end of 2013)
As far as vRAM goes: KVM already supports memory ballooning, and I have successfully tested it on a Windows Server 2008 R2 guest. This way you can in fact hot-add vRAM to a guest until you run out of physical RAM.
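A minimal sketch of what ballooning looks like in a Proxmox VM config (hypothetical VMID; `memory:` is the ceiling, `balloon:` the floor the guest can be shrunk to - Windows guests additionally need the virtio balloon driver installed):

```
# /etc/pve/qemu-server/101.conf  (hypothetical VMID)
# memory:  maximum RAM the guest can grow to (MB)
# balloon: minimum RAM target; the balloon driver reclaims
#          memory between this value and memory:
memory: 8192
balloon: 2048
```

With this in place the host can inflate or deflate the guest's effective RAM at runtime without a reboot.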


Thin Disks (copy-on-write) [yes/no]: Yes/partly/if done manually: qcow2 images for KVM support CoW. This technique, however, is not available for SAN storage if HA is a requirement; it works fine for NFS/iSCSI though. If you need CoW for OpenVZ, you need to store the OpenVZ containers on a filesystem that has this capability, such as ZFS.


Guest OS support: Linux, Windows (what specifically?) - see http://www.linux-kvm.org/page/Guest_Support_Status
OpenVZ runs any x86/amd64 Linux distribution that can run on a 2.6.32 kernel.
 
