Moving to LXC is a mistake!

Discussion in 'Proxmox VE: Installation and configuration' started by shukko, Jan 13, 2016.

Tags:
Thread Status:
Not open for further replies.
  1. shukko

    shukko New Member

    Joined:
    Oct 24, 2008
    Messages:
    23
    Likes Received:
    3
    OpenVZ has always been the unwanted adopted child of Proxmox.
    The Proxmox developers wanted to get rid of it from day one.
    Now, with the new 4.x series, they have replaced it with LXC without much further thought or testing.
    There are tons and tons of problems with LXC. OpenVZ has its own advantages. Although it lacks some features and its code base is old and somewhat messy, it is a good container system with a very heavily tested background, used in millions! of VPS containers.

    I myself have been a Proxmox lover from the first days. I started using it in the early 1.x versions. But the move to LXC hurts me badly. Very, very badly. With a completely new network system - no more venet - and a completely new disk system - no more flat-file-based storage - there is simply no good way to use LXC in Proxmox.

    There is absolutely zero difference between LXC and KVM in production.

    Think about it like this: you need to create a disk image, and you need to set up and create a new bridged network system.

    If only we could use templates. There are already OpenVZ templates, which we have tried to fix to work with LXC.

    All this AppArmor stuff, all these file permission problems. LXC has become completely pointless to use when we already have very well working QEMU/KVM without any problems.

    Can anybody tell me - and please convince me - why to choose LXC over KVM?

    LXC is hard to set up, hard to configure, has tons of problems, and has zero advantages over KVM.


    I really want and need OpenVZ support in the Proxmox 4.x series.

    If only I were a programmer, I could somehow fork the 4.x series and add OpenVZ support to it. But no, I am just a simple, noob system admin, who happens to be in charge of 50+ Proxmox 3.x servers with hundreds of OpenVZ containers, which I will have to move to QEMU/KVM in the upcoming years of my life...

    Sad... So sad...

    Moving to LXC is a mistake! LXC is just candy for the OpenVZ users of Proxmox, because the Proxmox developers cannot be bothered with the tons of complaints from OpenVZ users.

    Dear Proxmox, please drop LXC altogether and only provide QEMU/KVM.

    Why bother with an immature, problematic system in the first place? Just check your forums: LXC problems everywhere.
     
    Spazmic and gkovacs like this.
  2. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    15,940
    Likes Received:
    244
    LXC provides a slightly different feature set than OpenVZ, but has better storage support. For example, you can use ZFS sub-volumes. In that case you do not need to create disk images, and you get full snapshot support. Network configuration is also more flexible...
    IMHO the configuration is simpler than OpenVZ's.
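    A minimal sketch of what that looks like in practice (assuming a PVE 4.x node with a ZFS-backed storage named "local-zfs"; the CT ID, template filename, and sizes are illustrative, not from this thread):

    ```shell
    # Create a container whose rootfs is a ZFS sub-volume -- no disk image involved
    pct create 101 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
        --hostname ct101 \
        --rootfs local-zfs:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp

    # The rootfs appears as a plain ZFS dataset on the host
    zfs list -r rpool/data

    # Full snapshot support comes from ZFS snapshots underneath
    pct snapshot 101 before-upgrade
    pct rollback 101 before-upgrade
    ```

    Because the rootfs is a dataset rather than an image file, you can also inspect or back up the container's files directly from the host.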

    And in general, containers have less overhead, especially when you run idle workloads.

    Anyway, if it does not fit your needs, you can still use QEMU/KVM.
     
    yena likes this.
  3. PLE

    PLE New Member

    Joined:
    Aug 17, 2010
    Messages:
    4
    Likes Received:
    0
    I have to agree with shukko. Glitches and issues all over the place. LXC is just horrible and, at this point, not really a good replacement for OpenVZ.

    Please make OpenVZ available again (using their latest 3.10 kernel), or at least turn Proxmox VE 3.4 into an LTS release. Debian 7 LTS is supported until May 2018.
     
  4. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    15,940
    Likes Received:
    244
    Please report bugs instead.
     
  5. peterx

    peterx Member

    Joined:
    May 5, 2008
    Messages:
    38
    Likes Received:
    1
    Well, Dietmar, look here: https://forum.proxmox.com/threads/lxc-backup-randomly-hangs-at-suspend.25345/

    This seems to be a real bug, reported 28-12-2015 on this very list. No reaction from the Proxmox team. Very unusual, I must say. But still.
    For the rest, I'm quite happy to use the TurnKey LXC containers instead of the OpenVZ ones.
    I know the reasons for replacing OpenVZ with LXC, and I agree.

    Peter
     
  6. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    12,769
    Likes Received:
    296
    There is no OpenVZ for ANY kernel later than 2.6.32. In fact, OpenVZ development has stopped and a follow-up project has been started. The new "OpenVZ" is currently called Virtuozzo 7 (the team behind it has changed name and company several times, which is quite confusing and a bit hard to follow).

    Virtuozzo 7 is still under development, obviously not suitable for production, and not feature-compatible with the old OpenVZ project - it is more or less all new.

    See also their roadmap:
    https://openvz.org/Roadmap

    So if you want/need a stable OpenVZ, you can ONLY use the 2.6.32 kernel - available and maintained in the Proxmox VE 3.x series.
     
    postcd likes this.
  7. SamTzu

    SamTzu Member

    Joined:
    Mar 27, 2009
    Messages:
    346
    Likes Received:
    3
    Changes in this field are happening faster than we can handle. I saw these kernel problems coming over a year ago and wrote about the need to change to LXC on this forum. I'm happy to see that message got through. I never liked the idea that, unlike LXC, OpenVZ needs special modifications at the kernel level. LXC is definitely the way to go. I would have liked a smoother transition, though. Did you consider running them together and slowly phasing out OpenVZ? That would have given us much more room to maneuver with the migrations.
     
  8. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    490
    Likes Received:
    32
    I have to agree with @shukko and @PLE

    We've actually had to revert our cluster back to PVE 3.4 (and our containers back to OpenVZ) due to issues that - for now - make PVE 4.1 and LXC not ready for production:

    - There is no live (or even suspended) migration of LXC containers
    - There is no snapshot backup of LXC containers on LVM, so every backup means downtime
    - The number of CPUs (and some other /proc stats) is not reported correctly inside LXC containers; you see the host's stats instead, making it unusable for VPS hosting
    - LXC has security and stability issues (MySQL is much less stable on LXC than OpenVZ)
    - ZFS has stability and performance issues
    - The 4.2 kernel has a data corruption bug with ZFS: https://github.com/zfsonlinux/zfs/issues/3990
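    For what it's worth, the /proc point above is easy to reproduce on a stock PVE 4.1 node (the CT ID 101 is just an example; lxcfs, the component meant to virtualize these files for containers, did not yet cover all of them at the time):

    ```shell
    # On the host:
    nproc                                   # number of cores on the node

    # Inside the container, the same host-wide values leak through:
    pct exec 101 -- nproc                   # reports the host's core count,
                                            # not the container's CPU limit
    pct exec 101 -- head -n1 /proc/loadavg  # shows the host's load average
    ```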

    I know that the development resources of Proxmox GmbH are limited, but since I don't see us upgrading to PVE 4.x until the above issues are solved, the PVE 3.4 branch will need security and stability upgrades for the foreseeable future. I encourage everyone to buy a subscription, we are planning to do it as well from this year.
     
    #8 gkovacs, Jan 14, 2016
    Last edited: Jan 14, 2016
  9. pizza

    pizza Member

    Joined:
    Nov 7, 2015
    Messages:
    44
    Likes Received:
    3
    - Users can see the kernel messages from the node inside the LXC container
     
    grin and gkovacs like this.
  10. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    12,769
    Likes Received:
    296
    Yes, we support 3.x and 4.x - it seems you missed that, or do you mean something else?
     
  11. wbumiller

    wbumiller Proxmox Staff Member
    Staff Member

    Joined:
    Jun 23, 2015
    Messages:
    557
    Likes Received:
    63
    I think he meant running OpenVZ and LXC on the same node simultaneously?
     
  12. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    12,769
    Likes Received:
    296
    Maybe, but that is impossible, as there is no usable LXC for 2.6.32 kernels.
     
  13. e100

    e100 Active Member
    Proxmox VE Subscriber

    Joined:
    Nov 6, 2010
    Messages:
    1,227
    Likes Received:
    21
    Live migration never worked properly in OpenVZ, so it might as well not have existed there either.
    Seeking help on the OpenVZ forum was useless:
    https://forum.openvz.org/index.php?t=msg&th=10595&goto=45480

    A couple of years ago I came to the realization that OpenVZ was going nowhere, so I migrated all my containers to KVM.
    I recommend you do the same and move away from OpenVZ before you are forced to rely on abandonware.
     
  14. ned

    ned Member

    Joined:
    Jan 26, 2015
    Messages:
    80
    Likes Received:
    1
    So there will be further releases of 3.x?
     
  15. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    12,769
    Likes Received:
    296
    We provide security updates and kernel updates for 3.x as long as 3.x is officially supported.
     
  16. ned

    ned Member

    Joined:
    Jan 26, 2015
    Messages:
    80
    Likes Received:
    1
    Do we know how long 3.x will be officially supported?
     
  17. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    12,769
    Likes Received:
    296
  18. ned

    ned Member

    Joined:
    Jan 26, 2015
    Messages:
    80
    Likes Received:
    1
  19. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    490
    Likes Received:
    32
    We've had problems with it before (mainly migration errors), but it's still miles better than the backup-restore you are forced to use with LXC guests, which causes several hours of downtime for large guests.

    Consider yourself lucky... Unfortunately, we have many guests where the overhead caused by KVM would make them unusable.

    Some of our MySQL guests simply stopped working under KVM (they slowed to a crawl) under high load, while the same workload under OpenVZ gives us 50% higher transactions per second and a much more graceful slowdown under extreme load. We really need containers; unfortunately LXC is not there yet, so we are forced to stay with OpenVZ for the time being.
     
    shukko likes this.
  20. lweidig

    lweidig Member

    Joined:
    Oct 20, 2011
    Messages:
    98
    Likes Received:
    2
    I have to agree that these are all issues with the current state of Proxmox, and that the OpenVZ ecosystem was "better" as far as we are concerned. As for the other poster's comment that live migration of OpenVZ containers never worked properly: that is complete garbage. We moved thousands of these from one device to another and it worked great. The need for downtime with LXC is terrible - yes, it is minimal, but downtime is downtime!
     