Quote: I want a simple solution, not all of the "enchilada" that @alexskysilk notes.
Install Portainer.
Quote: I mean we still don't even have NUMA support, which makes Proxmox mostly useless on dual-socket or newer single-socket AMD platforms like Genoa, compared to other hypervisors.
Can you elaborate? I thought QEMU, and therefore PVE, is able to use NUMA.
Quote: Can you elaborate? I thought QEMU and therefore PVE is able to use NUMA.
This is exactly what we want:
Quote (from the wiki): If you enable this feature, your system will try to arrange the resources such that a VM does have all its vCPUs on the same physical socket.
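For context, the feature quoted above is the per-VM NUMA flag. In the guest config it is just something like this (VMID 100 and the topology are examples, not from my setup):
Code:
# /etc/pve/qemu-server/100.conf
numa: 1
sockets: 2
cores: 8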
Quote: So you assume, according to the Wiki, that all vCPUs of one VM should run on the same chiplet (or NUMA node).
No, I assume that they run on the NUMA node from which the memory is used, to reduce inter-NUMA-node communication.
Quote: The issue is that this doesn't work at all. Not even a bit.
Where is your proof? You just provide anecdotal evidence, which is totally useless to interpret.
Quote: Maybe we should split off to a new thread, this is much more interesting than another homelabber chiming in to want a new Docker GUI.
In the context of NUMA (Non-Uniform Memory Access) configurations, it's crucial to understand that the significance extends beyond just memory; the L3 cache plays a pivotal role. On Genoa platforms, L3 caches are distributed across NUMA nodes, with each cache supporting eight threads. This distribution is similar to how memory is handled, but with a key distinction:
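As an aside, anyone who wants to see that chiplet/L3 layout on their own box can dump it with stock tools (numactl may need to be installed first; the CACHE column shows which L3 each CPU maps to):
Code:
# NUMA nodes with their CPUs, memory sizes and distances
numactl --hardware
# per-CPU view: node, socket, core and cache indices (L1d:L1i:L2:L3)
lscpu --extended=CPU,NODE,SOCKET,CORE,CACHE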
Trying to understand what you want to say and inspecting my dual-socket Intel machines, my numastat shows that I have this:
Code:
$ numastat
                         node0           node1
numa_hit          180134743078    129405155945
numa_miss           2028661746       461859704
numa_foreign         461859704      2028661746
This shows that node0 has a miss ratio of 1.1% and node1 of 0.35%, which are both far from "not even a bit". This is the worst numastat I found; others have even lower misses.
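For clarity, that ratio is just numa_miss divided by (numa_hit + numa_miss) for each node, taken from the output above:
Code:
node0: 2028661746 / (180134743078 + 2028661746) ≈ 0.011  (~1.1%)
node1:  461859704 / (129405155945 + 461859704) ≈ 0.0036 (~0.35%)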
I don't have any AMD machine at hand right now, so how does it look on your machine? Have you configured NUMA for EACH VM?
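A quick way to check that across all guests on a node (standard PVE config path; adjust if yours differs):
Code:
# list VM configs where NUMA is NOT enabled
grep -L '^numa: 1' /etc/pve/qemu-server/*.conf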
Quote: There are your proofs and whatever you want.
Thank you very much for the detailed explanation. I wasn't aware of the cache situation, which is completely plausible.
Quote: You can fix this yourself if you use CPU pinning and do it yourself for each VM. But with a lot of VMs, and additionally if you move them between hosts, it's simply impossible to use CPU pinning.
You will not solve the memory NUMA allocation, only the cache allocation. I just tested it with the mbw benchmark, and on the hypervisor the QEMU process got memory from both (I have two) nodes. CPU pinning will give better performance, yet as you already stated, not that much on Intel. The difference varies with the ratio of the memory distribution over the NUMA nodes, and allocating all the wrong CPUs will significantly worsen the problem, up to 2.5x slower; yet this is worse than the default, which cycles around, so it may just be a strong corner case.
Quote: You will not solve the memory NUMA allocation
That is already available in the configuration file, yet not via the GUI and not automatically. I played around with it in this thread. It seems to work, and I am really interested in seeing whether it would be a solution for you and whether it will be faster (and easier to set up than just running taskset).
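For reference, a rough sketch of what I mean; VMID 101 and the core list are only examples, and the affinity option needs a reasonably recent PVE, so check man qm first:
Code:
# pin all vCPUs of VM 101 to host cores 0-7 via the VM config (no GUI)
qm set 101 --affinity 0-7
# or do it by hand against the running QEMU process
taskset -acp 0-7 $(cat /run/qemu-server/101.pid)
# check which NUMA node the guest memory actually ended up on
numastat -p $(cat /run/qemu-server/101.pid)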
Quote: Proxmox: KVM | LXC | Docker
When using Docker in Proxmox we can do distributed development.
It would completely dominate the market, as it can manage all 3 types of platforms.
Quote: When using docker in business, I faced the problem that rootless docker is optional and many docker images do not work correctly in rootless mode. I would never ever run docker as root on my PVE host. Another reason is that docker creates lots of virtual network connections, which might disturb your host network. Plus, docker can create huge amounts of data in /var/lib/docker.
This is my big technical reason for not wanting to run it on Proxmox, everything else aside. Proxmox is a complex system running with root access.
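For anyone who wants to see that footprint for themselves, run something like this on a test Docker host (not the PVE node); names and sizes will obviously differ:
Code:
# virtual interfaces and bridges Docker adds (docker0, veth*, br-*)
ip -brief link | grep -E 'docker|veth|br-'
docker network ls
# where image, container and volume data accumulates
docker system df
du -sh /var/lib/docker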
This buys a lot for little: fewer "how do I run Docker on the node" threads, fewer networking foot-guns, faster onboarding for mixed dev/ops teams, and a single pane of glass that still respects Proxmox's scope. It's additive to KVM and LXC, NOT A PIVOT, and it makes Proxmox the obvious home for shops that live on OCI images, CI/CD, and modern app delivery.
It is definitely necessary for the longevity and health of Proxmox as a platform, REGARDLESS of the technical considerations. I'm sorry, but anyone looking at this from a purely engineering-purist standpoint is simply missing the mark. This is simply a fact if you want to see Proxmox thrive.
"Support" here doesn’t mean turning PVE into a PaaS or re‑implementing Kubernetes. It can be narrow, safe, and optional: a Container Hosts wizard that spins a known‑good cloud‑init VM with Docker/containerd; automatic discovery/registration of a Portainer endpoint; and a lightweight Containers pane that shows stacks, lets me start/stop/restart, view basic logs, and offers an Open in Portainer link. No Docker on the PVE host, boundaries intact, PBS backups and HA stay VM‑level.
I don’t want the whole enchilada - I'm not saying that. I simply know we NEED a sane, paved road, and a simple UI so we're not memorizing shell incantations all day.
Quote: Is there any competing solution that offers VM/LXC (or other container)/and Docker all within the same unified interface? That is, a featureset that you'd like to see PVE match?
Incus allows running OCI containers in recent versions. The support is limited though (if I recall correctly, not all features from Docker work).
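If I remember the syntax correctly (Incus 6.3 or newer), it looks roughly like this; treat the exact flags as an assumption and check the Incus docs:
Code:
# add Docker Hub as an OCI image remote, then launch an application container from it
incus remote add oci-docker https://docker.io --protocol=oci
incus launch oci-docker:nginx web
incus list web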
OK, it seems I was not clear enough.
Quote: "Support" here doesn't mean turning PVE into a PaaS or re-implementing Kubernetes. It can be narrow, safe, and optional …
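Concretely, everything such a wizard would do can already be scripted with stock PVE tooling. A rough sketch, assuming a downloaded Debian cloud image, storage "local-lvm" and VMID 9000 as placeholders:
Code:
qm create 9000 --name docker-host --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket --vga serial0
qm set 9000 --ciuser devops --ipconfig0 ip=dhcp   # plus an SSH key via the cloud-init options
qm start 9000
# then install Docker/containerd inside the guest (cloud-init user data or a config tool)
# and register it in Portainer as an additional environment
The wizard idea is basically this, plus the Portainer registration, wrapped in the GUI.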
These shops already have procedures for setting up servers with Docker et al. Why should it be a problem for them to do the same with a VM? With Ansible and cloud-init (or something similar), which they probably already use, this is fully automated; a wizard still needs manual intervention.
Are there really DevOps people who can't set this up? I always assumed that's part of the job, but what do I know...
Why? For people running Windows VMs who need an affordable alternative to VMware (most German SMBs), such a feature doesn't change anything. For people who go the full Kubernetes route, a bare-metal Kubernetes cluster is more sensible than the overhead of virtualization, and they can also easily set up a cluster with VMs.
For what it's worth: AFAIK there are plans to add a way to convert OCI to LXC containers. In my humble opinion this is a bad idea, since the result will be something not supported by the application developers, just to appease homelabbers. I would prefer to invest development effort in other stuff like PDC or still-missing roadmap items.(1) But that's not my call to make.
(1) Even host backup (also not really needed, imho) would be a better investment.