Moving to LXC is a mistake!


shukko

OpenVZ has always been the unwanted adopted child of Proxmox, right from the beginning.
The Proxmox developers wanted to get rid of it from day one.
Now, with the new 4.x series, they have replaced it with LXC without much further thought or testing.
There are tons and tons of problems with LXC. OpenVZ has its own advantages. Although it lacks some features and its code base is somewhat old and somewhat garbage, it is a good container system with a very heavily tested background, used in millions(!) of VPS containers.

I myself have been a Proxmox lover from the first days. I started using it in the early 1.x versions. But the move to LXC hurts me badly. Very, very badly. With a completely new network system (no more venet) and a completely new disk system (no more flat, file-based storage), there is simply no point in using LXC in Proxmox.

There is absolutely zero difference between LXC and KVM in production.

Think about it like this: you need to create a disk image, and you need to set up and configure a new bridged network system.
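
(Just to show what I mean by "bridged network system": on a typical Proxmox/Debian host this is roughly the kind of /etc/network/interfaces stanza you end up maintaining. The interface names and addresses here are only an example, not my real setup.)

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0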

If only we could use templates. There are already OpenVZ templates, which we have tried to fix up to work with LXC.

All this AppArmor stuff, all these file permission problems... LXC has become completely nonsensical to use, since we already have Qemu/KVM working very well without any problems.

Can anybody tell me, and please try to convince me, why I should choose LXC over KVM?

LXC is hard to set up, hard to configure, has tons of problems, and has zero advantages over KVM.


I really want and need OpenVZ support in the Proxmox 4.x series.

If only I were a programmer, I could somehow fork the 4.x series and add OpenVZ support to it. But no, I am just a simple, noob system admin who happens to be in charge of 50+ Proxmox 3.x servers with hundreds of OpenVZ containers, which I will have to move to Qemu/KVM in the coming years of my life...

Sad... So sad...

Moving to LXC is a mistake! LXC is just candy for the OpenVZ users of Proxmox, because the Proxmox developers cannot be bothered with the tons of complaints from OpenVZ users.

Dear Proxmox, please drop LXC altogether and only provide Qemu/KVM.

Why bother with an immature, problematic system in the first place? Just check your forums: LXC problems everywhere.
 
LXC provides a slightly different feature set than OpenVZ, but has better storage support. For example, you can use ZFS sub-volumes. In that case you do not need to create disk images, and you have full snapshot support. Also, network configuration is more flexible...
IMHO the configuration is simpler than OpenVZ's.

And in general, containers have less overhead, especially when you run idle workloads.

Anyway, if it does not fit your needs, you can still use Qemu/KVM.
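
To illustrate the ZFS sub-volume point, this is roughly what it looks like on the command line. (This is only a sketch: the storage name "local-zfs", the pool path "rpool/data", VMID 100 and the template file name are placeholders, and the exact pct options may differ slightly between 4.x releases.)

    # create a container whose root filesystem is a ZFS subvolume (no disk image involved)
    pct create 100 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
        -hostname ct100 -memory 1024 -storage local-zfs \
        -net0 name=eth0,bridge=vmbr0,ip=dhcp

    # the rootfs is a plain ZFS dataset, so you can snapshot it directly
    zfs snapshot rpool/data/subvol-100-disk-1@before-upgrade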
 
I have to agree with shukko. Glitches and issues all over the place. LXC is just horrible and, at this point, not really a good replacement for OpenVZ.

Please make OpenVZ available again (using their latest 3.10 kernel), or at least turn Proxmox VE 3.4 into an LTS release. Debian 7 LTS is supported until May 2018.
 
Please make OpenVZ available again (using their latest 3.10 kernel), or at least turn Proxmox VE 3.4 into an LTS release. Debian 7 LTS is supported until May 2018.

There is no OpenVZ for ANY kernel later than 2.6.32. In fact, OpenVZ development has stopped and a follow-up project has been started. The new "OpenVZ" is currently called Virtuozzo 7 (the team behind it has changed name and company several times, which is quite confusing and a bit hard to follow).

Virtuozzo 7 is still under development, obviously not suitable for production, and not feature-compatible with the old OpenVZ project; it is more or less all new.

See also their roadmap:
https://openvz.org/Roadmap

So if you want/need stable OpenVZ, you can ONLY use 2.6.32 - available and maintained in the Proxmox VE 3.x series.
 
Changes in this field are happening faster than we can handle. I saw these kernel problems coming over a year ago and wrote about the need to change to LXC on this forum. I'm happy to see that the message got through. I never liked the fact that, unlike LXC, OpenVZ needs special modifications at the kernel level. LXC is definitely the way to go. I would have liked a better transition, though. Did you consider running them together and slowly phasing out OpenVZ? That would have given us much more room to maneuver with the migrations.
 
I have to agree with @shukko and @PLE

We've actually had to revert our cluster back to PVE 3.4 (and our containers back to OpenVZ) due to issues that - for now - make PVE 4.1 and LXC not ready for production:

- There is no live (or even suspended) migration of LXC containers
- There is no snapshot backup of LXC containers on LVM, so every backup means downtime
- The number of CPUs (and some other /proc stats) is not reported correctly inside LXC containers; you see the host's stats instead, which makes it unusable for VPS hosting (a quick way to check this is shown at the end of this post)
- LXC has security and stability issues (MySQL is much less stable on LXC than on OpenVZ)
- ZFS has stability and performance issues
- The 4.2 kernel has a data corruption bug with ZFS: https://github.com/zfsonlinux/zfs/issues/3990

I know that the development resources of Proxmox GmbH are limited, but since I don't see us upgrading to PVE 4.x until the above issues are solved, the PVE 3.4 branch will need security and stability updates for the foreseeable future. I encourage everyone to buy a subscription; we are planning to do so as well, starting this year.
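
For anyone who wants to reproduce the /proc issue mentioned above, a quick generic check from inside a container (plain shell commands, nothing Proxmox-specific) is to look at what the guest actually sees:

    # run inside the LXC container: these show the *host* values, not the container limits
    nproc
    grep -c ^processor /proc/cpuinfo
    head -n 3 /proc/meminfo

Many applications (the JVM, for example) size their default thread pools and memory usage from exactly these values, which is why this matters so much for VPS hosting.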
 
I have to agree with @shukko and @PLE
- There is no live (or even suspended) migration for LXC containers
- There is no snapshot backup for LXC containers on LVM, so every backup means downtime
- The number of CPUs (and some other /proc stats) is not reported correctly inside LXC containers; you see the host's stats instead, which makes it unusable for VPS hosting
- LXC has security and stability issues (MySQL is much less stable on LXC than on OpenVZ)
- ZFS has stability and performance issues
- The 4.2 kernel has a data corruption bug with ZFS: https://github.com/zfsonlinux/zfs/issues/3990

- Users can see the host node's kernel messages from inside the LXC container
 
... Did you consider running them together and slowly phasing out OpenVZ? ...

Yes, we support both 3.x and 4.x - it seems you missed that, or do you mean something else?
 
I think he meant running OpenVZ and LXC on the same node simultaneously?
 
I think he meant running OpenVZ and LXC on the same node simultaneously?

Maybe, but that is impossible, as there is no usable LXC for 2.6.32 kernels.
 
I have to agree with @shukko and @PLE
- There is no live (or even suspended) migration of LXC containers

Live migration never worked properly in OpenVZ, so it might as well not have existed there either.
Seeking help on the OpenVZ forum was useless:
https://forum.openvz.org/index.php?t=msg&th=10595&goto=45480

A couple of years ago I came to the realization that OpenVZ was going nowhere, so I migrated all my containers to KVM.
I recommend you do the same and move away from OpenVZ before you are forced to rely on abandonware.
 
So there will be further releases of 3.x?

We provide security updates and kernel updates for 3.x as long as 3.x is officially supported.
 
Live migration never worked properly in OpenVZ, so it might as well not have existed there either.
We've had problems with it before (mainly migration errors), but it's still miles better than the backup/restore you are forced to use with LXC guests, which causes several hours of downtime for large guests.

A couple of years ago I came to the realization that OpenVZ was going nowhere, so I migrated all my containers to KVM. I recommend you do the same and move away from OpenVZ before you are forced to rely on abandonware.
Consider yourself lucky... Unfortunately, we have many guests where the overhead caused by KVM would make them unusable.

Some of our MySQL guests simply stopped working under KVM (they slowed down to a crawl) under high load, while the same workload under OpenVZ gives us 50% higher transactions per second and a much more graceful slowdown under extreme load. We really need containers; unfortunately, LXC is not there yet, so we are forced to stay with OpenVZ for the time being.
 
I have to agree with @shukko and @PLE

We've actually had to revert our cluster back to PVE 3.4 (and our containers back to OpenVZ) due to issues that - for now - make PVE 4.1 and LXC not ready for production:

- There is no live (or even suspended) migration of LXC containers
- There is no snapshot backup of LXC containers on LVM, so every backup means downtime
- The number of CPUs (and some other /proc stats) is not reported correctly inside LXC containers; you see the host's stats instead, which makes it unusable for VPS hosting
- LXC has security and stability issues (MySQL is much less stable on LXC than on OpenVZ)
- ZFS has stability and performance issues
- The 4.2 kernel has a data corruption bug with ZFS: https://github.com/zfsonlinux/zfs/issues/3990

I know that the development resources of Proxmox GmbH are limited, but since I don't see us upgrading to PVE 4.x until the above issues are solved, the PVE 3.4 branch will need security and stability updates for the foreseeable future. I encourage everyone to buy a subscription; we are planning to do so as well, starting this year.

I have to agree that these are all issues with the current state of Proxmox, and that the OpenVZ ecosystem was "better" as far as we are concerned. As for the other poster's comment that live migration of OpenVZ containers never worked properly: that is complete garbage. We moved thousands of these from one device to another and it worked great. The need for downtime with LXC is terrible; yes, it is minimal, but downtime is downtime!
 
