Is anyone here actually running kubernetes on their LXC containers?

windowsrefund

New Member
Sep 30, 2021
I know I've been able to get k3s running on my LXC containers in the past. That was when I ran a plain OS and set up LXD myself.

Then I decided to try to be clever, wiped my box, and installed Proxmox in order to, among other things, benefit from its support for LXC containers.

Between the well-documented problems with /dev/kmsg (mistakenly marked as resolved in this forum) and other issues involving cgroups, I just want to know: is anyone here actually running Kubernetes in their containers? I have a sneaking suspicion everyone bailed on this approach and went over to the VM side of things. Please prove me wrong :)
 

arkan

New Member
Nov 29, 2021

In the container:

modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/5.13.19-1-pve
modprobe: FATAL: Module overlay not found in directory /lib/modules/5.13.19-1-pve
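Note: those modprobe failures are expected inside LXC, since the guest shares the host kernel and has no modules of its own; the modules have to be loaded on the Proxmox host instead. A minimal check, run on the host, would look something like:

```shell
# Check on the Proxmox host whether the kernel modules k3s needs
# are loaded; containers cannot load modules themselves.
for mod in br_netfilter overlay; do
  if grep -qw "^$mod" /proc/modules; then
    echo "$mod loaded"
  else
    echo "$mod missing (run: modprobe $mod)"
  fi
done
```

If a module is missing, `modprobe` it on the host and add its name to /etc/modules so it persists across reboots.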




But on my host the overlay module is loaded:

[screenshot: lsmod output on the host showing the overlay module loaded]



Any idea?
 

tabnul

New Member
Jan 3, 2021
I followed the same guide and it works.

My k3s does throw those errors as well, though; they seem to be false positives.
Proxmox 6.4 latest, Ubuntu 20.04 LXC, latest k3s stable
 

vherrlein

New Member
Feb 1, 2022
Hi,

I’m running a k3s cluster of 3 nodes and 1 master within privileged LXC containers.

All of them are hosted on a Proxmox 7.1 cluster of 3 machines with a complex SDN topology.

So it's feasible, but really painful when you start from scratch.

There are a lot of tips and tricks to put in place to make sure containerd will work seamlessly, plus additional module setup for Kubernetes networking…

My initial setup was the following.
  • Host machine:
    • Debian Bullseye based (Proxmox VE 7.0)
    • LXC v4.0.9-4
    • Additional Kernel Modules loaded at startup (/etc/modules):
br_netfilter
overlay
aufs
ip_vs
nf_nat
xt_conntrack

  • LXC Guest Machine:
    • Pure Debian Bullseye
    • LXC extra settings:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow:
lxc.cgroup.devices.deny:
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw cgroup:rw:force"
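On Proxmox, per-container overrides like the ones above normally live in /etc/pve/lxc/<vmid>.conf. One extra trick frequently needed for k3s, and related to the /dev/kmsg problem raised at the top of the thread, is giving the guest a /dev/kmsg. A common community workaround (not an official fix) is a small boot script inside the container, sketched here as an rc.local:

```shell
#!/bin/sh -e
# /etc/rc.local inside the LXC guest (community workaround, not official):
# k3s writes to /dev/kmsg, which LXC containers usually don't provide.
# Pointing it at /dev/console is enough for k3s to start.
[ -e /dev/kmsg ] || ln -s /dev/console /dev/kmsg
exit 0
```

Remember to make the script executable (`chmod +x /etc/rc.local`) so it actually runs at boot.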
 

vherrlein

New Member
Feb 1, 2022
From my experience, the major drawback of using LXC containers for Kubernetes is that you will not be able to implement certain persistent storage providers, such as:
- OpenEBS
- Kasten
- Longhorn
- Rook
Mostly because they require access to low-level kernel APIs, and I have already spent too much time on attempts.

For my part, I went with Kadalu, an open-source encapsulated GlusterFS architecture within Kubernetes.
It's a young project but quite promising.

So, if you are looking for simplicity, don't go with LXC; use a pre-made bundle instead.
The resource overhead of full virtualization is negligible.
 

tabnul

New Member
Jan 3, 2021
I tried it; the cluster worked, but I reverted to VMs.
- Too many permissions needed, basically giving k8s full host access
- Issues with the snapshotter in combination with ZFS-backed storage, endlessly eating storage space for no reason
- I was expecting issues with Longhorn, as explained above
- Too many workarounds needed to get it working, and hard to automate using Terraform. I want a seamless experience.
 
