Is anyone here actually running kubernetes on their LXC containers?

windowsrefund
New Member
Sep 30, 2021
I know I've been able to get k3s running on my LXC containers in the past. This was when I just ran a straight-up OS and set up LXD myself.

Then I decided to try and be clever, wiped my box, and installed Proxmox in order to, among other things, benefit from the support for LXC containers.

Between the well-documented problems with /dev/kmsg (mistakenly marked as resolved in this forum) and other issues involving cgroups, I just want to know if anyone here is actually running Kubernetes in their containers. I have a sneaking suspicion everyone bailed on this approach and went over to the VM side of things. Please prove me wrong :)
 
  • Like
Reactions: LEI

In the container:

modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/5.13.19-1-pve
modprobe: FATAL: Module overlay not found in directory /lib/modules/5.13.19-1-pve




But on my host the overlay module is loaded (see the attached lsmod screenshot).



Any idea?
 
I followed the same guide and it works.

My k3s does throw those errors as well, however; they seem to be false positives.
Proxmox 6.4 (latest), Ubuntu 20.04 LXC, latest k3s stable.
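For what it's worth, those modprobe failures inside the guest are expected: the container shares the host kernel and cannot load modules itself, so they have to be loaded on the Proxmox host. A minimal sketch (module names taken from the errors above; on the host you would run this as root with MODULES_FILE=/etc/modules — the scratch-file default just keeps it safe to dry-run):

```shell
# Load the modules now and persist them across reboots.
# On the real host: MODULES_FILE=/etc/modules, run as root.
MODULES_FILE="${MODULES_FILE:-./modules.txt}"
for mod in br_netfilter overlay; do
    modprobe "$mod" 2>/dev/null || true   # no-op in a dry run / if already loaded
    # append only if not already listed, to avoid duplicates
    grep -qxF "$mod" "$MODULES_FILE" 2>/dev/null || echo "$mod" >> "$MODULES_FILE"
done
```

After that, `lsmod | grep -E 'br_netfilter|overlay'` on the host should show both, and the errors inside the container become harmless noise.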
 
Hi,

I’m running a k3s cluster of 3 nodes and 1 master within privileged LXC containers.

All of them are hosted on a Proxmox 7.1 cluster of 3 machines with a complex SDN topology.

So it’s feasible, but really painful when you start from scratch.

There are a lot of tips and tricks to put in place to make sure containerd will work seamlessly, plus additional module setup for Kubernetes networking…

My initial setup was the following.
  • Host machine:
    • Debian Bullseye based (Proxmox VE 7.0)
    • LXC v4.0.9-4
    • Additional Kernel Modules loaded at startup (/etc/modules):
br_netfilter
overlay
aufs
ip_vs
nf_nat
xt_conntrack

  • LXC Guest Machine:
    • Pure Debian Bullseye
    • LXC extra settings:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow:
lxc.cgroup.devices.deny:
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw cgroup:rw:force"
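For reference, on Proxmox these extra settings go at the end of the container's config, /etc/pve/lxc/&lt;vmid&gt;.conf — the CT id 101 below is hypothetical, and on a cgroup-v2 host (PVE 7+) the device key is `lxc.cgroup2.devices.allow` rather than `lxc.cgroup.devices.allow`. Sketch written against a local file so it is self-contained:

```shell
# Hypothetical CT id 101; on a real host, target /etc/pve/lxc/101.conf as root.
CONF="${CONF:-./101.conf}"
cat >> "$CONF" <<'EOF'
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw cgroup:rw:force"
EOF
# Apply with a full stop/start (a reboot from inside is not enough):
#   pct stop 101 && pct start 101
```

Note this effectively removes the container's confinement, which is exactly the security trade-off discussed further down in the thread.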
 
  • Like
Reactions: semanticbeeng
From my experience, the major drawback of using LXC containers for Kubernetes is the fact that you will not be able to implement certain persistent storage providers such as:
- OpenEBS
- Kasten
- Longhorn
- Rook
Mostly because they require access to low-level kernel APIs, and I have already spent too much time on attempts.

For my part, I went with Kadalu, which is an open-source encapsulated GlusterFS architecture within Kubernetes.
It’s a young project but quite promising.

So, if you are looking for simplicity, don’t go with LXC; use a pre-made bundle instead.
The resource overhead of a full virtualization is negligible.
 
I tried it; it worked (the cluster worked), but I reverted to VMs.
- Too many permissions needed, basically giving full host access to k8s.
- Issues with the snapshotter in combination with ZFS-backed storage: endlessly eating storage space for no reason.
- I was expecting issues with Longhorn, as explained above.
- Too many workarounds needed to get it working; hard to automate using Terraform. I want a seamless experience.
 
  • Like
Reactions: wbedard
This thread needs a bump, because it really doesn't make sense that we are still in 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes.
Also, having Proxmox and Ceph is a must, not only for the reliability but also because the architecture makes it much simpler to manage data-center operations.
I know that many here use Proxmox as a home lab, but I have a cluster that has been working in the Public Health System of Brazil for 6 years non-stop.
Joining Kubernetes to this architecture using VMs feels to me like renting a train to carry an airplane to deliver a single letter to a city 5 km away. Really cutting the chicken with a chainsaw.

Please, if any of you have managed to make progress on this, come to me. I'll appreciate it, and I'm sure many Brazilian citizens will too!
 
This thread needs a bump, because it really doesn't make sense that we are still in 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes.
Also, having Proxmox and Ceph is a must, not only for the reliability but also because the architecture makes it much simpler to manage data-center operations.
I know that many here use Proxmox as a home lab, but I have a cluster that has been working in the Public Health System of Brazil for 6 years non-stop.
Joining Kubernetes to this architecture using VMs feels to me like renting a train to carry an airplane to deliver a single letter to a city 5 km away. Really cutting the chicken with a chainsaw.

Please, if any of you have managed to make progress on this, come to me. I'll appreciate it, and I'm sure many Brazilian citizens will too!
From my point of view, LXC is not production-ready as a host for Kubernetes.

After many years of attempts at finding real business cases, each time a lot of issues show up:
- Security
- Shared Kernel with the Hypervisor
- Automation
- Maintenance
- Upgrade
- Backups / Snapshots
Finally, qemu/KVM is much more suitable from my point of view.

BTW, all my previous infra under LXC has been migrated to KVM: no more workarounds, and clean hypervisors.

You are talking about storage architecture; it all depends on your needs. If your plan is to have a hyperconverged infra such as:
a. VMs on demand or virtual appliances which don't require high IOPS: it could make sense to use Ceph as backend storage.
We could debate Ceph; other technologies exist, but of course they are not implemented by Proxmox by default.
b. Kubernetes cluster(s) (K3s, K8s, RHEL OpenShift/OKD, ...): ZFS as backend storage seems more performant, as long as you choose your own suitable persistent storage provider.

In the end, everyone can say this one is the best or that one is worse than the other...
It always depends on your hardware, network topology, infrastructure, software architecture and, of course, the budget you have.

If you are not satisfied with a product, help the community or move away.
You have plenty of choices on the market: VMware ESXi/vSphere, Microsoft Hyper-V/Azure, OpenStack, Citrix, XCP-ng / Xen, ...
 
Hello, this is a great thread and helped me get k3s running on unprivileged LXC containers with saltstack. I further complicated the install by following the CIS hardening guidelines and HA etcd guides. I am still working on it at the moment.

tabnul's comments about too many permissions and it being difficult and broken are definitely correct!

I am mostly worried about the security. containerd in LXC, even with a lot of access granted, seems like it would be secure enough, though it seems most cloud offerings are running on VMs.

What I want to know is whether the performance and security of an LXC container with containerd are better than a full VM's.

I tried it; it worked (the cluster worked), but I reverted to VMs.
- Too many permissions needed, basically giving full host access to k8s.
- Issues with the snapshotter in combination with ZFS-backed storage: endlessly eating storage space for no reason.
- I was expecting issues with Longhorn, as explained above.
- Too many workarounds needed to get it working; hard to automate using Terraform. I want a seamless experience.

P.S. At this point, with how painful it is to configure LXC containers, VMs might be better even with their worse performance.

Short version of the server config.yaml I am using. I removed the CIS recommended options to keep it short.
YAML:
protect-kernel-defaults: false
kube-apiserver-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
kube-controller-manager-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
kubelet-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
snapshotter: 'fuse-overlayfs'
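In case it helps anyone reproducing this: k3s picks up that file from /etc/rancher/k3s/config.yaml by default (a k3s convention, not something specific to this thread's setup). A sketch of staging it and installing, written against a local directory so it is self-contained:

```shell
# Stage the server config; on a real node, DEST would be /etc/rancher/k3s.
DEST="${DEST:-./etc-rancher-k3s}"
mkdir -p "$DEST"
cat > "$DEST/config.yaml" <<'EOF'
protect-kernel-defaults: false
kubelet-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
snapshotter: 'fuse-overlayfs'
EOF
# Then run the installer; k3s reads the config file automatically:
#   curl -sfL https://get.k3s.io | sh -s - server
```

The fuse-overlayfs snapshotter is what avoids the overlay-module dependency inside an unprivileged container.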
 
This thread needs a bump, because it really doesn't make sense that we are still in 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes.
Also, having Proxmox and Ceph is a must, not only for the reliability but also because the architecture makes it much simpler to manage data-center operations.
I know that many here use Proxmox as a home lab, but I have a cluster that has been working in the Public Health System of Brazil for 6 years non-stop.
Joining Kubernetes to this architecture using VMs feels to me like renting a train to carry an airplane to deliver a single letter to a city 5 km away. Really cutting the chicken with a chainsaw.

Please, if any of you have managed to make progress on this, come to me. I'll appreciate it, and I'm sure many Brazilian citizens will too!
I was looking for the same, but it seems LXC is still too far behind to be ready to support Kubernetes. Sad. Anyway, in my case I'm using Terraform to deploy the VMs, which is working very well, and I will set up Ansible to handle the final Kubernetes backup. I will use Ansible as well to apply all the needed CIS benchmarks.
 
A few months ago, I learned about Kata Containers, and they look to me like a good fit for Proxmox as an option to support application containers while keeping them as infrastructure.
Please review their website (https://katacontainers.io/) and check whether my consideration is valid. The Kata Containers project is part of the Open Infrastructure Foundation (https://openinfra.dev/projects/).
I want to bring the idea up here first, and if it gets some traction, I will probably proceed to open a proper feature request. We can also start a new thread if that seems better. I looked on the forum to see if anyone mentions Kata Containers anywhere else, but found no reference.

(attached: Kata Containers architecture diagram)
 
  • Like
Reactions: esi_y and navigator
A few months ago, I learned about Kata Containers, and they look to me like a good fit for Proxmox as an option to support application containers while keeping them as infrastructure.
Please review their website (https://katacontainers.io/) and check whether my consideration is valid. The Kata Containers project is part of the Open Infrastructure Foundation (https://openinfra.dev/projects/).
I want to bring the idea up here first, and if it gets some traction, I will probably proceed to open a proper feature request. We can also start a new thread if that seems better. I looked on the forum to see if anyone mentions Kata Containers anywhere else, but found no reference.

It seems like a nice idea, but:

First of all, PVE is already well integrated with LXC.

It has been a successful implementation, and we have TKL (TurnKey Linux) support for it.

I don't know if the effort is worth it for such a huge integration; understand that to request a feature in Proxmox that supports Kata, we would have to change the entire architecture of VMs and LXC containers.

It is a well-known project, although immature: started in 2017, it is only 7 years old, whereas PVE, like LXC, is already 18 years old. I think it's a thumbs up, but with caution.
 
It seems like a nice idea, but:

First of all, PVE is already well integrated with LXC.

Not so well, maybe. Also, there was a time when OpenVZ was a thing (back then I did not even touch PVE, because LXD was starting at the same time I was looking). Sadly, both platforms tried to cover both the VM and LXC worlds, and it's a mess.

It has been a successful implementation, and we have TKL (TurnKey Linux) support for it.

I don't know if the effort is worth it for such a huge integration; understand that to request a feature in Proxmox that supports Kata, we would have to change the entire architecture of VMs and LXC containers.

I don't understand the "effort" argument, but I have noticed PVE is wary of touching anything "too novel" until there are no further options (rgmanager comes to mind).

It is a well-known project, although immature: started in 2017, it is only 7 years old, whereas PVE, like LXC, is already 18 years old. I think it's a thumbs up, but with caution.

Why should it be one or the other?
 
Well, for example, if we compare with OpenShift or OpenStack, those are huge projects, kind of working on a different approach to the same problem PVE addresses.

From my point of view, some things like multi-cluster integration and HA between geo-locations can be done with K8s, for example, but then everything becomes a performance problem with VMs and so on. Then why even use PVE? Why not just pure K8s on bare metal?

BTW, if you check, the latest feature request in this regard (multi-geo-location) was requested 5 to 6 years ago. That was a long time ago, and people are still working on a beta release.

This is what was missing to implement K8s with distributed etcd. The problem is that although Docker can run inside an LXC container and K8s can deploy Docker pods, the performance issues are more of a priority nowadays. I don't know if you have followed the discussions here, but many things are being done in Rust. This is a plus for Kata Containers.

So I don't know; as I said before, this is only my opinion and my point of view, and only one point of view.

My intent with this post and reply is to stimulate discussion of this in the community. Maybe open a new thread and direct the efforts at listing the pros and cons of Kata Containers?

The community is already mature enough to have this type of discussion and approach. I'm glad you replied; you listed some good and straightforward points in its defense. Can you open a new thread about it and list pros and cons?

Also, are you willing to work on code to make this happen, if it becomes a decision for the entire PVE community?
 
