Proxmox + OpenVZ vs Citrix XenServer

Jan 26, 2011
I build and destroy my ProxMox cluster quite often; in fact, after each new ProxMox release. The only limitation I have seen is that all VMIDs must be unique across every hardware node. Every time, I could successfully reconnect my cluster after a ProxMox version upgrade, regardless of where the VMs were placed: on the master or on a slave.

I am very interested in your experience with Citrix XenServer. Could you start another topic, something like "Citrix XenServer in comparison with ProxMox"? In my humble opinion, based on my personal experience, you will sometimes find that OpenVZ is not a gift at all. In several situations it can be a real plague.

This is a reply to SuSt from this post: (http://forum.proxmox.com/threads/5913-Create-cluster-with-existing-OpenVZ-CT-on-node)

I started looking at OpenVZ after a friend shared his experiences; he has been running his business under OpenVZ for quite some time now and has been quite happy with it. So after a little research, I decided to make the switch. My main reasons for switching from Citrix XenServer to Proxmox + OpenVZ are:

+ Easier management, including direct access to the guest file system. Citrix uses a proprietary format, which requires you to take the VM offline and go through a multi-step process to mount the guest VM's filesystem; OpenVZ guest file systems are fully available from the host node, even while the VM is running (see the example after this list).

+ Live migration without having to use shared storage. With OpenVZ you get the best of both worlds: fast local storage, but still the ability to do live migrations (also shown in the example below).

+ Super simple clustering with Proxmox. XenServer allows you to create pools, but there is almost no benefit in doing so if you're not using shared storage. Not only that, but I recall that when joining or removing a node (I can't remember which) under XenServer, it warned me that doing so would wipe out all VMs on the host.

+ XenServer is not open source, and unless your budget can handle $1,000 to $5,000 per license, there will always be missing features. With Proxmox you benefit from all the development of the other open source projects that Proxmox builds on. Granted, a lot of those only benefit VMs running under KVM, which I may consider in the future if I have issues with OpenVZ.

+ More flexible resource management. With XenServer, you cannot pool memory or disk space between VMs.

+ Proxmox's web interface does not require a Windows client. Also, XenServer's XenCenter Windows client has been very slow to load for me (perhaps because I have lots of servers that are not pooled together).

+ Better guest performance. I have not done my own benchmarks, but I have run into plenty of other sites that make this claim for OpenVZ. It makes sense that OpenVZ would run faster, since there is no virtualization layer with OpenVZ containers.

+ Better support. I've run into various issues with XenServer, posted about them on the Citrix forums, and got zero response from Citrix staff. For example: http://forums.citrix.com/thread.jspa?threadID=280282&tstart=0, or the read-only file system issue: http://forums.citrix.com/message.jspa?messageID=1506321. Compare those threads with what you see on the Proxmox forum. That being said, I wish there were a paid support option for ProxMox that cost less than nearly $300 USD per ticket. But since they seem to answer most questions for free, you really can't complain.
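To make the first two points concrete, here is roughly what direct file access and a live migration look like from the host node. This is only a sketch: the container ID 101, the paths and the target host name are placeholders, and I'm assuming the default Proxmox/OpenVZ directory layout.

    # Browse and edit a running container's files straight from the host node;
    # with OpenVZ the guest's root is just a directory on the host.
    ls /var/lib/vz/private/101/etc/
    vi /var/lib/vz/private/101/etc/hosts

    # Live-migrate the running container to another node using only local
    # storage; vzmigrate rsyncs the private area, then suspends and resumes it.
    vzmigrate --online other-node.example.com 101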

OK, SuSt, now I've shared why I'm leaving XenServer... I'd be interested to know what problems you've seen with OpenVZ, so I know what to watch out for. ;)

Thanks,

Curtis
 
OK.

I have been running OpenVZ in production for about two years, so I can share my experience if you'd like.

First of all, let me say a few words about the essence of this technology.

As a matter of fact, OpenVZ is a very complex set of kernel patches plus a user-space utility called "vzctl" that manages almost all the features of the modified kernel. I said "very complex", but what does that mean? Please see this comment, posted by one of the chief OpenVZ developers:
I saw this bug many times. And fixed it many times too. This problem (especially with hanging "init" task) can be caused by many different reasons and bugs.
Don't you find it strange that a developer fixes the same bug many times?
So, as a result of such complexity, even a stable OpenVZ-modified kernel inevitably contains a great many bugs.

Let's go on. The OpenVZ patch set is initially developed for RHEL (Red Hat Enterprise Linux) and then ported to other distributions, such as Debian and ProxMox (which is based on Debian). Debian maintainers apply their own patches to the kernel, so the result is a hellish mishmash of modifications that are sometimes incompatible with each other.

But that's not the end of the story. All the kernel modules, such as file system support, iptables, network support, etc., must be modified too, otherwise they won't support virtualization. And they are, but not all of them and not perfectly. For example, you can't mount a remote samba/cifs file share "from inside" the container, because the cifs kernel module remains unsupported by OpenVZ.
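The usual workaround is to do the mount on the host node and bind it into the container. A sketch, assuming container ID 101, the default directory layout and a placeholder share:

    # On the host node: mount the cifs share there instead of in the container.
    mount -t cifs //fileserver/share /mnt/share -o user=someuser

    # Then bind-mount it into the running container's root filesystem.
    mount --bind /mnt/share /var/lib/vz/root/101/mnt/share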

The need to develop a new set of patches every time a new (even minor) version of the vanilla Linux kernel is released leads to a great lag between vanilla and OpenVZ kernel releases. So, if you buy a modern server with up-to-date hardware inside, you can't be sure that you will be able to run OpenVZ on it. I have already run into this problem.

The next aspect of the matter: if you virtualize simple workloads like Apache or MySQL, you will have no trouble. But certain software requires advanced tuning of process, memory or network management; I mean applications such as Asterisk, Redis or OpenVPN. And then shaman dances with tambourine are required to make this work. Most of the issues are known and solutions exist, but they are not trivial at all.
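OpenVPN is a typical example: the container needs a TUN/TAP device, which only the host node can grant. The usual recipe looks roughly like this (a sketch; container ID 101 is a placeholder):

    # On the host node: let container 101 use the tun device (char 10:200)
    # and give it the capability to configure network interfaces.
    vzctl set 101 --devices c:10:200:rw --save
    vzctl set 101 --capability net_admin:on --save

    # Create the device node inside the container.
    vzctl exec 101 mkdir -p /dev/net
    vzctl exec 101 mknod /dev/net/tun c 10 200
    vzctl exec 101 chmod 600 /dev/net/tun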

The next headache is resource management and security. OpenVZ has its own mechanisms for distributing RAM and CPU among VEs, and its own OOM killer implementation. These methods are based on so-called "beancounters", and they are not perfect either. In some cases a potential malefactor can run a fork bomb or something similar in one VE, and the functioning of all the other VEs on the same hardware node will be disrupted.
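The beancounters are at least visible and tunable. For instance, you can watch for hit limits and cap the process count of a container (a sketch; the container ID and the limit values are arbitrary examples, not recommendations):

    # On the host node: inspect beancounter usage and limits;
    # a non-zero failcnt column means a limit has been hit.
    cat /proc/user_beancounters

    # Cap the number of processes in container 101 to blunt fork bombs
    # (barrier:limit syntax).
    vzctl set 101 --numproc 256:256 --save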

Also, please don't forget that OpenVZ is one common kernel serving all the VEs running on the box. A kernel panic or a system fault means a crash of the whole bundle of environments. And that is not uncommon. For example, see the bug I have already mentioned. By the way, notice how much time it took for this bug to be fixed: it was reported as far back as 2010-09-01, and the ProxMox team released a corrected kernel only a few days ago, almost half a year later. And it's not the only possible cause of a crash. Sometimes everything can fail with a kernel panic after an attempt to change some VE settings "on the fly", without a container shutdown.

Some Linux distributions may be incompatible with OpenVZ in principle. I don't follow news from this battlefront, but I remember that ArchLinux had trouble with OpenVZ virtualization.

So, summing up the above: OpenVZ is really VERY fast and convenient, both in operation and in maintenance. But it is also extremely buggy, lags behind the vanilla kernel in features, and in several cases has hard limitations with no simple workaround.
 
Thanks for all your insights, SuSt. We are running simple Apache/PHP/MySQL setups, usually just one container per node. So, based on what you're saying here, it sounds like it will be stable. I will avoid doing anything very fancy on them... if we need something really special, then KVM is probably where we will look next.

And while six months does sound like a long time to wait for a fix, at least it looks like issues eventually get looked at. You will note that the two issues I had with XenServer still have not drawn any response from Citrix at all. Perhaps they would respond if I were a paying customer; I'm not sure. So, if you compare the support given for the free versions of both, Proxmox definitely wins.

Curtis
 
it sounds like it will be stable.
Yes, you have understood me correctly. Plain Apache with MySQL, in conjunction with "one container per node", will never cause any difficulties.

But there is one more thing you should know. As soon as you start to use any resources external to your container, such as "mount -t nfs", "mount --bind" (from the host node into the container) or external network devices (NETDEV=blah-blah-blah), you can say goodbye to live migration. You have been warned. :)
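For reference, such host-to-container bind mounts are usually configured with a per-container mount action script, which is exactly the kind of host-side dependency that pins a container to one node. A sketch, assuming VEID 101 and a placeholder path:

    #!/bin/bash
    # /etc/vz/conf/101.mount -- run by vzctl on the host whenever CT 101 is mounted.
    source /etc/vz/vz.conf        # global defaults
    source ${VE_CONFFILE}         # per-container config; provides ${VE_ROOT}
    mount --bind /srv/shared ${VE_ROOT}/mnt/shared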
 
Just a few comments regarding OpenVZ in general and OpenVZ for 2.6.32 in particular: 2.6.32 is still a development branch, and no one has claimed it is stable. The only stable OpenVZ implementation is 2.6.18; 2.6.32 is expected to be the next stable branch.
see http://pve.proxmox.com/wiki/Proxmox_VE_Kernel

So if you expect a stable feature set, you should go for the stable branch.

Proxmox VE uses different sources for its kernels; we always try to choose the best one available. This means some kernels are Debian based, some are Ubuntu based, and some are RHEL based (e.g. 2.6.18, with stable OpenVZ).
 
The good thing about ProxMox is that you always have KVM as a backup, so if your Asterisk needs fancy kernel modules, you can run just it inside KVM while still having the same user interface. For example, I've set up a BigBlueButton server, which uses FreeSWITCH as the voice back-end. I thought it would also require kernel modules to work, like Asterisk does, but it runs really smoothly as an OpenVZ guest without any kernel modules.

shaman dances with tambourine
That is the first time I've seen this expression in English :D
 
I know MongoDB has serious problems running on OpenVZ.

Anyone with LXC experience? They have just released a new version.
 

Thanks for the warning, Tom. According to this page: http://pve.proxmox.com/wiki/Proxmox_VE_Kernel -- "Beginning with 1.6 the 2.6.32 is the default and recommended branch."

...which I suppose I incorrectly assumed meant it was "stable", and so now I have 2.6.32 running on a few machines.

That being said, if I run into problems, is a downgrade to 2.6.18 possible, or would I need to do a full re-install of proxmox on the host node?

Curtis
 
Thanks for the warning, Tom. According to this page: http://pve.proxmox.com/wiki/Proxmox_VE_Kernel -- "Beginning with 1.6 the 2.6.32 is the default and recommended branch."

...which I suppose I incorrectly assumed meant it was "stable", and so now I have 2.6.32 running on a few machines.
There is/was a lot of discussion about this, and the majority voted for this approach. But the wiki page tells you exactly that on 2.6.32, OpenVZ is the development branch. And yes, for most of our users it is the right selection, but for OpenVZ it is better to go for the stable branch. Sorry for the confusion, but things are not always that simple. We try to make everybody happy, at the cost that users have to think about which kernel is best suited to their needs.

That being said, if I run into problems, is a downgrade to 2.6.18 possible, or would I need to do a full re-install of proxmox on the host node?

Curtis

Just follow the wiki page; that means just do an aptitude install proxmox-ve-2.6.18. Can't be easier.
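So the downgrade is a package install plus a reboot, roughly like this (a sketch; the package name comes from the wiki page above):

    # On the host node: install the 2.6.18 kernel branch meta-package.
    aptitude install proxmox-ve-2.6.18

    # Reboot into the newly installed kernel, then verify.
    reboot
    uname -r    # should now report a 2.6.18 OpenVZ kernel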
 
