Hi everyone
First off, I do appreciate that measuring memory usage on modern OSs is an inherently tricky topic, and there's not really one definitive "correct" answer to the question "how much free memory do I have", given all the intricacies involving virtual memory, filesystem caching, etc.
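For what it's worth, the closest thing to a single number I've found is the kernel's own `MemAvailable` estimate from `/proc/meminfo`, which accounts for reclaimable caches (though, as I understand it, not for ZFS ARC). A quick sketch of how I'm reading it:

```python
# Read /proc/meminfo and report the kernel's own view of memory.
# MemAvailable is the kernel's estimate of how much memory can be
# allocated without swapping, and is usually a better gauge than
# "total minus used", which counts reclaimable caches as "used".
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are in kB
    return info

m = meminfo()
print(f"total:     {m['MemTotal'] / 2**20:.1f} GiB")
print(f"free:      {m['MemFree'] / 2**20:.1f} GiB")
print(f"available: {m['MemAvailable'] / 2**20:.1f} GiB")
```

But even that doesn't line up with what the PVE dashboard shows me.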
That said, on my home-lab, three-node PVE (8.2.4) setup, I am observing memory usage figures that make less and less sense to me. Each of the three nodes has 32GB. On one node, for instance, I have a single 640MB LXC container running, plus one VM with 4GB assigned and another with 8GB. With this workload, the PVE dashboard shows 27GB (88%) used.
I can't rule out coincidence, but it feels like it got this bad when I began playing with Kubernetes. I'm running a k3s cluster spanning three Ubuntu VMs (8GB each), one per physical PVE host, with Longhorn providing storage. Is it possible Longhorn is the culprit, perhaps by memory-mapping its volumes in a way that erroneously counts against physical memory usage?
Is there currently any solution to this problem other than summing up the memory allowances of all the running VMs and guesstimating whether you're still fine based on that?
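For reference, my current guesstimate amounts to roughly the following (a sketch, assuming the stock `/etc/pve/qemu-server/*.conf` layout; it ignores LXC containers, ballooning, and host/ARC overhead):

```python
# Sum the configured "memory:" values (in MB) across all qemu-server
# VM configs on a node. This is only an upper bound on guest memory,
# not actual usage.
import glob
import re

def configured_memory_mb(config_text):
    """Extract the configured memory (MB) from one qemu-server config."""
    match = re.search(r"^memory:\s*(\d+)", config_text, re.MULTILINE)
    return int(match.group(1)) if match else 0

def total_configured_mb(pattern="/etc/pve/qemu-server/*.conf"):
    total = 0
    for path in glob.glob(pattern):
        with open(path) as f:
            total += configured_memory_mb(f.read())
    return total
```

It works, but it tells me nothing about why the dashboard figure is so much higher than the sum.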