Running Proxmox inside Kubernetes

telmich

New Member
Nov 26, 2023
Hello,

We are currently evaluating different virtualization stacks as part of upgrading our infrastructure. Like many organisations, we already run a lot of our workloads in Kubernetes, currently alongside the existing virtualization solution.

I am aware that, conceptually, OpenStack can be run on top of Kubernetes with either openstack-helm or Yaook. We are running (parts of) OpenNebula inside Kubernetes.

My question to the Proxmox experts here is: how (un)thinkable is it to run Proxmox inside Kubernetes?

As a first sketch, I would imagine it as follows:

* all Proxmox components run in a Debian-based container (or in an official Proxmox container image, if one exists)
* one Proxmox container is deployed per host (see the sketch after this list)
* networking is realised either via the host network or via Multus
* the containers probably need to be privileged in order to run QEMU/KVM

These are just initial thoughts on how it could be made to work. I was wondering whether anyone here has already tried this, or whether there are general show stoppers for the approach.
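As a very rough illustration of the per-host deployment idea (untested; the image name, namespace, and paths below are placeholders, and as far as I know there is no official Proxmox container image), a DaemonSet could look along these lines:

```yaml
# Hypothetical sketch: one privileged Proxmox pod per node via a DaemonSet.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: proxmox-node
  namespace: proxmox        # placeholder namespace
spec:
  selector:
    matchLabels:
      app: proxmox-node
  template:
    metadata:
      labels:
        app: proxmox-node
    spec:
      hostNetwork: true     # PVE expects to own the node's network identity
      hostPID: true         # make qemu processes visible to the host and vice versa
      containers:
      - name: pve
        image: registry.example.com/pve-debian:bookworm  # placeholder image
        securityContext:
          privileged: true  # needed for /dev/kvm, bridges, cgroup management
        volumeMounts:
        - name: dev-kvm
          mountPath: /dev/kvm
        - name: pve-state
          mountPath: /etc/pve   # cluster state must survive pod restarts
      volumes:
      - name: dev-kvm
        hostPath:
          path: /dev/kvm
          type: CharDevice
      - name: pve-state
        hostPath:
          path: /var/lib/proxmox-state  # placeholder; a PVC would also work
```

Persisting the cluster state across pod restarts is probably the hard part (in reality /etc/pve is pmxcfs, a FUSE filesystem backed by a local database, so the mount above is a simplification); the hostPath is just a stand-in for whatever storage would actually back it.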
 
While I myself do not appreciate getting replies to my own questions to the effect of "why would you want to do that", with the aftertaste of the other person trying to tell me something, I really did re-read your question twice. Originally I thought it would be about running k8s on PVE nodes, which would be fairly straightforward.

I am currently experimenting with running a PVE cluster on top of Xen, and that seems to work just fine. This is mostly because I do not find the IAM all that coherent and would prefer to have, e.g., entirely separate clusters for separate users, and I only need it for LXCs.

But back to your question, and my curiosity: in what setup would you want to run it inside k8s? The point of k8s is to have relatively ephemeral pods around which, when they die, are replaceable; they are not restarted, and their replacements are not identical. Coming back to the usual PVE deployment scenario with cluster nodes, this is rather hard to imagine (for me) in terms of usefulness. If you are considering single-node PVE, it would likely be possible in the way you envisage, but then having to run privileged containers defeats the purpose, I am afraid, compared to having it the other way around (PVE as the host) or separate altogether (PVE as a regular VM).

What am I missing?
 
It's actually very simple:

Look at Kubernetes more as "a system that ensures everything is running" rather than as something that only runs short-lived/ephemeral workloads.

Instead of installing some hosts with Debian/Proxmox, we can install all hosts with the same OS (Alpine Linux in our case), and the actual workload then runs in a container.

Actually, a lot of stateful applications run in k8s, using PVCs for persistent storage, and Ceph (via Rook) is also a common pattern, so almost everything is already in place for "just running Proxmox" on top of it.
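For illustration, the storage side is plain Kubernetes. A claim against a Rook-provisioned Ceph pool might look like this (the StorageClass name "rook-ceph-block" follows Rook's example manifests; the actual name depends on how Rook was deployed):

```yaml
# Hypothetical sketch: persistent storage for a stateful pod, backed by Ceph via Rook.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pve-data
spec:
  accessModes:
  - ReadWriteOnce           # RBD block volume, mounted by one node at a time
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 100Gi
```

A pod (or a StatefulSet's volumeClaimTemplates) can then mount this claim, and the data survives the pod being rescheduled, which is exactly the property a "Proxmox in k8s" pod would rely on.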
 

Given the scale you indicated, I would guess you are about to run k8s on OpenStack, in which case I wonder what the added value of PVE on top of all that would be. You can have all the workloads on OpenStack already. I think one goes for PVE to avoid the extra complexity (which you can evidently handle); no other reason comes to mind, at least.
 
Hi, I ran across this thread because I had the same thought as the author. My hypothetical use case is nowhere near as complex, though. I have four physical PVE hosts with Ceph shared between them. Linode, for example, offers "fully managed Kubernetes infrastructure", which, if I'm not mistaken, runs on top of their NVMe block storage. The tech specs of that are better than my existing storage can achieve, although the WAN link would be the bottleneck in transit. I was curious whether, if it were possible to run a PVE host inside Kubernetes, that could be used to extend Ceph outwards and in the process inherently gain off-site replication/redundancy. All PVEs would be interconnected via VPN. Perhaps the Proxmox Backup Server could run as a container as well?
 