Survey: Proxmox VE Kernel with or without OpenVZ?

Proxmox VE Kernel with or without OpenVZ?

  • Keep old Kernel with OpenVZ support (2.6.24)
    Votes: 143 (60.3%)
  • Use the latest Linux Kernel (without OpenVZ but with best KVM and hardware support)
    Votes: 94 (39.7%)

  Total voters: 237
Nice! That leaves just 17 days 416 hours UTC for you guys to complete your testing. What kind of coffee do you drink in Vienna? Need some from San Francisco? :)

How long does Proxmox and the community typically keep things in testing? From the road map in the wiki, it looks like you and your team have been moving along quickly over the last 4 months.
 

testing depends on results. as long as there are major open issues we do not mark it as stable, so let's see - you can help with testing as soon as we publish it.
 
Hmmmm, ok time to insert my newbie foot into the conv :)

First let me say that I'm coming to PVE from a different direction than most.

I had actually been working at a company (out of work now, guys - who's hiring?) where I maintained roughly 800 physical servers, initially running Xen (3.0-3.2 hypervisors), which I was transitioning to KVM.

I went out on the web hunting for a solution to centrally manage a large number of servers/clusters of KVM machines. Let me say that PVE definitely has a great deal to offer to those who want to run a KVM-only environment.

While it is true that there are several decent X-based desktop apps for maintaining KVM machines, there is a lack of stable web-based solutions. I found Enomalism to be under-documented and a pain to maintain, I didn't like the commercial direction Ubuntu's Landscape software was going in, etc., etc., etc.

Proxmox VE does an amazing job of providing a simple, clean, working interface with a bare-metal installer! Most importantly, it didn't try to lock me into a single storage model like LVM.

So when I started using PVE this past week in my home environment, I mainly looked to it as a way to migrate existing KVM machines; my experience with OpenVZ was nil. However, having seen how much easier VZ would make building small daemon hosts (DNS, RWHOIS, websites), I have to admit to liking the containers from an administrative standpoint. Hell, at one point in the field I had over 40 KVM VMs just running DNS forwarders.
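For example, building one of those small DNS containers with the stock OpenVZ tooling is roughly something like this (just a sketch - the CTID, template name, and addresses are made-up placeholders):

# hypothetical example: create a tiny DNS container with vzctl
vzctl create 101 --ostemplate debian-5.0-standard
vzctl set 101 --hostname ns1.example.com --ipadd 192.0.2.53 --save
vzctl start 101
vzctl exec 101 apt-get -y install bind9   # install the DNS daemon inside the container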

However, ease of use aside, KVM appears to be the way of the future at the moment, and I certainly understand the development team's need for the newest kernel, etc. What I would ask, however, is: why not try to use the libvirt API? Allow different Proxmox nodes to run different kernels/hypervisors (mixed in different combinations) and limit creation/migration depending on the selected destination node.

I know this is a much more involved direction; however, it would mean the broadest hypervisor support and allow the PVE project to expand to a much wider audience.

Just my two cents, please excuse me if I've mistakenly opened past wounds/discussions.

--David
 

libvirt is written in C and contains large parts we will never use. With Perl, we can do the same in a tenth of the code and time. Therefore development is MUCH faster without libvirt.
 
Hey tom,

The last thing I'm suggesting is replacing your entire underlying interface with libvirt. You'd never retain the same direct VM control/feel. However, adding basic libvirt support for create, destroy, modify, start, and stop would allow you to support a large number of additional hypervisors you don't have today. Keep direct code for KVM and OpenVZ, but use libvirt for the many others you don't want to individually track and maintain. Then it just becomes a kernel/package selection issue for each bare-metal host.
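To be concrete, the lifecycle operations I mean map onto libvirt's virsh client roughly like this (just a sketch, assuming a local qemu:///system connection and a pre-written guest.xml; the guest name is a placeholder):

virsh -c qemu:///system define guest.xml    # register the guest from its XML description
virsh -c qemu:///system start myguest       # start it
virsh -c qemu:///system shutdown myguest    # clean (ACPI) stop
virsh -c qemu:///system destroy myguest     # hard stop
virsh -c qemu:///system edit myguest        # modify the XML definition
virsh -c qemu:///system undefine myguest    # remove the definition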

Ultimately the problem isn't just which environment is best. It's that people naturally attach to something they learn to trust, and talking them into something else is just not going to happen quickly. I know people using User-Mode Linux, and others using VMware ESX, whom you'll never convince to move. I'd rather see a project that fills the management-console void between them all.

Either way, thanks for a great product so far!!!!

--David
 

you missed the main goal of Proxmox VE - we provide an integrated solution, not just a very small part like a web interface or a kernel.

99 % of our work is making sure that the selected components fit together and the end user can have a running system in a few minutes.

there are already a lot of quite nicely designed web interfaces around for managing some parts, but that is a completely different approach and it's not what you want.

the market leader provides a similar integrated environment (closed source), and that's one reason why they are the market leader. if you want to use libvirt, there are already solutions around - just think of Red Hat, the libvirt maintainer.
 
Hi all!

Proxmox VE currently uses a 2.6.24-based kernel. Due to the limitations of OpenVZ, no more recent kernel is possible (the OpenVZ 2.6.26/27 branches are quite similar and also quite old).

So the question is: should we go for the latest kernel to get the latest and greatest KVM functionality and the best hardware support?

What do you think? Please vote!

We now have 3 kernel branches - so all usage scenarios are covered.

See http://www.proxmox.com/forum/showthread.php?t=2853
 
From my point of view it would be best to have one version for VZ users and one for KVM-only users.
 
KVM support is most important for my customers; OpenVZ is low priority. Being able to specify at install time what kind of server you want (KVM, OpenVZ, or both) would be a good solution.
 
I use both KVM and OpenVZ, and having to choose just one system would kill my interest in Proxmox.

However, kernel 2.6.24 isn't sufficient for my use. My servers have bnx2 network cards, and the driver support in 2.6.24 is buggy (kernel panic every 2 days).

These problems were resolved in 2.6.27, so I would greatly appreciate an OpenVZ-capable kernel based on it (OpenVZ did release a 2.6.27 kernel on December 25th).

Mixing kernels is not an option (2.6.24 on some servers and 2.6.32 on the others), as I also use OCFS2, whose versions (1.4 in 2.6.24 and 1.5 in 2.6.32) are not compatible.

thanks,
François
 

If you need OpenVZ and KVM, go for the proxmox-ve-2.6.18 kernel (currently in pvetest). what's the issue with ocfs2?
 
The issue with 2.6.18 and 2.6.24 is the support for the bnx2 NIC, which makes the server hang with a kernel panic from time to time.

This issue was resolved in 2.6.27.


The issue with OCFS2 is that each node in an OCFS2 cluster must have the same capabilities/version, and thus the suggestion of having two kernels, one dedicated to KVM and another to OpenVZ, doesn't work in this case.
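A rough way to check this on each node (assuming the ocfs2 module exposes a version string to modinfo) is:

# run on every cluster node and compare the results
uname -r
modinfo ocfs2 | grep -i '^version'   # e.g. 1.4.x on 2.6.24 vs 1.5.x on 2.6.32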
 

did you test our 2.6.18? this kernel also works quite well with KVM, so I do not suggest two kernels, I suggest one - 2.6.18. FYI, our 2.6.18 uses the stable OpenVZ branch and KVM works quite well (a lot of backports).
 
The 2.6.18 kernel does not support ocfs2.

uranus:~# uname -a
Linux uranus 2.6.18-1-pve #1 SMP Mon Dec 21 10:03:07 CET 2009 x86_64 GNU/Linux
uranus:~# gunzip -c /proc/config.gz |grep OCFS2
# CONFIG_OCFS2_FS is not set
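On kernels without /proc/config.gz, the same check can be done against the packaged config file, plus a dry-run modprobe to see whether an ocfs2 module is available at all (a sketch, assuming a Debian-style /boot/config-<version> file is installed):

grep OCFS2 /boot/config-$(uname -r)   # same check via the packaged kernel config
modprobe -n -v ocfs2                  # dry run: shows whether an ocfs2 module could be loaded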
 
Great to hear!

Has the KVM "PCI pass-through" feature been back-ported to 2.6.18 or maybe 2.6.24 yet?

It seemed to be available in Red Hat 5 now:
http://rhn.redhat.com/errata/RHEA-2009-1269.html
(the last item on the list of enhancements on that page)

Thanks for your help.

we are currently testing pci-passthrough for the 2.6.18 (and 2.6.32) kernels - it's expected to work.
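As a side note, independent of the kernel branch, PCI pass-through also needs hardware IOMMU support (Intel VT-d or AMD IOMMU) enabled in the BIOS and on the kernel command line; a rough sketch of the usual host-side checks, not a Proxmox-specific procedure:

egrep -c '(vmx|svm)' /proc/cpuinfo   # CPU virtualization extensions present?
dmesg | grep -i -e dmar -e iommu     # did the kernel detect an IOMMU / VT-d?
cat /proc/cmdline                    # typically needs intel_iommu=on (or amd_iommu=on)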
 

Thanks for your quick reply.

I don't see testing on 2.6.24 mentioned there. Is it safe for me to assume that going forward we will see 2.6.18 (for OpenVZ and KVM) and 2.6.32 (for KVM) replacing 2.6.24 altogether, so that only two kernels are left for Proxmox?
 
