Is anyone here actually running Kubernetes on their LXC containers?

windowsrefund · New Member · Sep 30, 2021
I know I've been able to get k3s running in LXC containers in the past. That was when I just ran a straight-up OS and set up LXD myself.

Then I decided to try and be clever, wiped my box, and installed Proxmox in order to, among other things, benefit from the support for LXC containers.

Between the well-documented problems with /dev/kmsg (mistakenly marked as resolved in this forum) and other issues involving cgroups, I just want to know if anyone here is actually running Kubernetes in their containers. I have a sneaking suspicion everyone bailed on this approach and went over to the VM side of things. Please prove me wrong :)
 

In the container:

modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/5.13.19-1-pve
modprobe: FATAL: Module overlay not found in directory /lib/modules/5.13.19-1-pve




But on my host the overlay module is loaded:

[screenshot showing the module loaded on the host]



Any idea?
 
I followed the same guide and it works.

My K3s does throw those errors as well; they seem to be false positives.
Proxmox 6.4 (latest), Ubuntu 20.04 LXC, latest stable K3s
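For what it's worth, the likely reason those errors are harmless: an LXC container shares the host's kernel and usually has no /lib/modules tree of its own, so modprobe fails inside the container even when the module is already active on the host. A minimal sketch of what to check on the Proxmox host itself (not in the container); the commands are standard and the paths assume the usual Debian layout:

Code:
# On the Proxmox host: load the modules K3s expects
modprobe br_netfilter
modprobe overlay

# Verify they are active (the container runs on this same kernel)
lsmod | grep -E 'br_netfilter|overlay'

# Make them persist across reboots
printf 'br_netfilter\noverlay\n' >> /etc/modules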
 
Hi,

I'm running a k3s cluster of 3 nodes and 1 master within privileged LXC containers.

All of them are hosted on a Proxmox 7.1 cluster of 3 machines with a complex SDN topology.

So it's feasible, but really painful when you start from scratch.

There are a lot of tips and tricks to put in place to make sure containerd works seamlessly, plus additional module setup for Kubernetes networking…

My initial setup was the following.
  • Host machine:
    • Debian Bullseye based (Proxmox VE 7.0)
    • LXC v4.0.9-4
    • Additional Kernel Modules loaded at startup (/etc/modules):
br_netfilter
overlay
aufs
ip_vs
nf_nat
xt_conntrack

  • LXC Guest Machine:
    • Pure Debian Bullseye
    • LXC extra settings (an annotated example follows below):
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow:
lxc.cgroup.devices.deny:
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw cgroup:rw:force"
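To make those settings concrete, here is a minimal annotated sketch of how they might land in a Proxmox container config file; the VMID 101 is hypothetical, and note that on a pure cgroup-v2 host the device keys would be lxc.cgroup2.devices.* instead:

Code:
# /etc/pve/lxc/101.conf -- appended after the Proxmox-managed options
# Run the container without AppArmor confinement so containerd/kubelet are not blocked:
lxc.apparmor.profile: unconfined
# Empty values clear the default device restrictions and capability drops:
lxc.cgroup.devices.allow:
lxc.cgroup.devices.deny:
lxc.cap.drop:
# Mount /proc and /sys read-write and force a writable cgroup hierarchy:
lxc.mount.auto: "proc:rw sys:rw cgroup:rw:force"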
 
From my experience, the major drawback of using LXC containers for Kubernetes is that you will not be able to implement certain persistent storage providers, such as:
- OpenEBS
- Kasten
- Longhorn
- Rook
mostly because they require access to low-level kernel APIs, and I have already spent too much time on attempts.

For my part, I went with Kadalu, which is an open-source GlusterFS architecture encapsulated within Kubernetes.
It's a young project, but quite promising.
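For illustration, workloads consume such storage through an ordinary PersistentVolumeClaim; here is a minimal sketch, assuming a Kadalu-provisioned StorageClass named kadalu.replica3 (the class name is an assumption and depends on how you set Kadalu up):

YAML:
# pvc.yaml -- a standard Kubernetes PVC; only storageClassName is Kadalu-specific
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: kadalu.replica3   # hypothetical; taken from your Kadalu install
  accessModes:
    - ReadWriteMany                   # GlusterFS-backed volumes can be shared
  resources:
    requests:
      storage: 1Gi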

So, if you are looking for simplicity, don't go with LXC; use a pre-made bundle instead.
The resource overhead of full virtualization is negligible.
 
I tried it, and it worked (the cluster worked), but I reverted to VMs:
- too many permissions needed, basically giving k8s full host access
- issues with the snapshotter in combination with ZFS-backed storage, endlessly eating storage space for no reason (a possible mitigation is sketched below)
- I was expecting issues with Longhorn, as explained above
- too many workarounds needed to get it working, and hard to automate using Terraform. I want a seamless experience.
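On the snapshotter/ZFS point: one mitigation sometimes used (and which a later post in this thread also applies) is to switch K3s away from the default overlayfs snapshotter. A minimal sketch of the relevant setting, assuming the standard /etc/rancher/k3s/config.yaml location:

YAML:
# /etc/rancher/k3s/config.yaml -- equivalent to passing --snapshotter on the CLI
snapshotter: 'fuse-overlayfs'   # or 'native'; avoids layering overlayfs on top of ZFS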
 
This thread needs an UP, because it really doesn't make sense that we are still in 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes.
Also, having Proxmox and Ceph is a must, not only for the reliability but also because of the architecture, which makes it much simpler to manage datacenter operations.
I know that many here use Proxmox as a home lab, but I have a cluster that has been working in the public health system of Brazil for 6 years non-stop.
Joining Kubernetes to this architecture using VMs feels to me like renting a train to carry an airplane to deliver a single letter to a city 5 km away. Really cutting the chicken with a chainsaw.

Please, if any of you managed to evolve this approach, come to me. I'll appreciate it, and I'm sure many Brazilian citizens will too!
 
From my point of view, LXC is not production-ready as a host for Kubernetes.

After many years of attempts at finding real business cases, each time a lot of issues show up:
- Security
- Shared Kernel with the Hypervisor
- Automation
- Maintenance
- Upgrade
- Backups / Snapshots
Finally, qemu/KVM is much more suitable from my point of view.

BTW, all my previous infra under LXC has been migrated into KVMs: no more workarounds, and clean hypervisors.

You are talking about storage architecture; it all depends on your needs. If your plan is to have a hyperconverged infra such as:
a. VMs on demand or virtual appliances that don't require high IOPS, it could make sense to use Ceph as backend storage.
We could debate Ceph; other technologies exist, but of course they are not implemented by Proxmox by default.
b. Kubernetes cluster(s) (K3s, K8s, RHEL OpenShift/OKD, ...), ZFS as backend storage seems more performant, as long as you choose a suitable persistent storage provider.

In the end, everyone can say this one is the best or that one is worse than the other...
It always depends on your hardware, network topology, infrastructure, software architecture and, of course, the budget you have.

If you are not satisfied with a product, help the community or move away.
You have plenty of choices on the market: VMware ESXi/vSphere, Microsoft Hyper-V/Azure, OpenStack, Citrix, XCP-ng/Xen, ...
 
Hello, this is a great thread and it helped me get k3s running in unprivileged LXC containers with SaltStack. I further complicated the install by following the CIS hardening guidelines and the HA etcd guides. I am still working on it at the moment.

tabnul's comments about too many permissions and it being difficult and broken are definitely correct!

I am mostly worried about security. Containerd in LXC, even with that much access granted, seems like it would be secure enough, though it seems most cloud offerings run on VMs.

What I want to know is whether the performance and security of an LXC container with containerd are actually better than a full VM's.


P.S. At this point, with how painful it is to configure the LXC containers, VMs might be the better option even with the worse performance.

Short version of the server config.yaml I am using. I removed the CIS-recommended options to keep it short.
YAML:
# The kubelet cannot change kernel sysctls inside an LXC container, so don't
# treat unexpected kernel defaults as fatal:
protect-kernel-defaults: false
# KubeletInUserNamespace relaxes checks that fail without full kernel/cgroup access:
kube-apiserver-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
kube-controller-manager-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
kubelet-arg:
  - 'feature-gates=KubeletInUserNamespace=true'
# fuse-overlayfs works without kernel overlayfs privileges in an unprivileged container:
snapshotter: 'fuse-overlayfs'
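For anyone following along: K3s reads this file from /etc/rancher/k3s/config.yaml by default, so applying it could look like the sketch below (the install command is the standard upstream one; paths assume defaults):

Code:
# Place the config where K3s looks for it by default
mkdir -p /etc/rancher/k3s
cp config.yaml /etc/rancher/k3s/config.yaml

# Install/start the K3s server; it picks up config.yaml automatically
curl -sfL https://get.k3s.io | sh -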
 
I was looking for the same, but it seems LXC is still too far behind to support Kubernetes. Sad. Anyway, in my case I'm using Terraform to deploy the VMs, which is working very well, and I will set up Ansible to handle the final Kubernetes backup. I'll use Ansible as well to apply all the needed CIS benchmarks.
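In case it helps anyone going the same route, here is a minimal sketch of deploying such a VM with Terraform via the community Telmate/proxmox provider; the provider choice, node name pve1, and template name k8s-template are all assumptions, not the poster's actual setup:

Code:
terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"   # community provider; one of several options
    }
  }
}

# Three identical VMs cloned from a cloud-init template, to be configured by Ansible
resource "proxmox_vm_qemu" "k8s_node" {
  count       = 3
  name        = "k8s-node-${count.index + 1}"
  target_node = "pve1"          # hypothetical Proxmox node name
  clone       = "k8s-template"  # hypothetical VM template
  cores       = 4
  memory      = 8192
}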
 
