We are all users, regardless of whether we pay a subscription or not. Business users are not first-class citizens and homelabbers are not second-class citizens. What some users do not offer in terms of monetary support to the Proxmox project, they make up for in volume with the patches they send and the documentation/content they create (most of Proxmox's wide reach is indeed built on the vast amount of content created for it, mostly by homelabbers).
I hence wholeheartedly invite you to reconsider this logic. We are all users, and we must be able to consolidate and litigate issues with our combined interests in mind. Not only is this dichotomy of users unethical, it is also inaccurate, as many people started as homelabbers before their use grew into making money off Proxmox.
I wholeheartedly agree with this definition of what free means, regardless of what the right thing to do is. Failing to engage with and acknowledge this simple perspective seems, to me, to speak of a problem in our community, one which is manifesting in this thread.
Some people are clearly prepared to go to incredible lengths to convince everyone that since the VM is using a great part of this RAM for caching, and since not allowing it to use as much cache would affect its function, this amount of cache is "used memory" and hence "unavailable memory". We disagree. Many people disagree (https://www.linuxatemyram.com/). Please acknowledge that this is not a universal point of view.
Now that we know it is misleading, we will start ignoring it in favour of an internal probing method; but then, what use is this RAM utilisation bar indicator? Maybe the right solution would be to remove it entirely.
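What such an internal probe would report is easy to see, because the guest kernel itself distinguishes memory that is merely occupied by reclaimable cache from memory that is genuinely unavailable. A minimal sketch, assuming a Linux guest recent enough (3.14+) to expose MemAvailable in /proc/meminfo:

Code:
#!/usr/bin/env python3
# Minimal sketch: compare the naive "used" figure (total - free) with the kernel's
# own MemAvailable estimate, read from /proc/meminfo inside the guest. Values are in kiB.

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])
    return info

m = meminfo()
total = m["MemTotal"]
naive_used = total - m["MemFree"]                # what a simple gauge shows
available = m.get("MemAvailable", m["MemFree"])  # what the kernel estimates is actually usable

print(f"naive 'used':    {naive_used / total:.1%} of RAM")
print(f"truly available: {available / total:.1%} of RAM")

On a long-running machine the first number tends towards 100% purely because of the page cache, while the second stays high; that gap is the whole disagreement in this thread.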
I did not know what ballooning was until I did some reading. As far as I understood, it is an optional feature that requires an agent running inside the VM and is disabled by default.
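For what it's worth, whether the guest side of ballooning is present can be checked from inside a Linux VM. A small sketch, assuming the guest uses the standard virtio_balloon kernel module (it may also be built into the kernel, in which case it will not appear in /proc/modules):

Code:
#!/usr/bin/env python3
# Sketch: check from inside a Linux guest whether the virtio balloon driver is loaded.
# Note: a built-in (non-modular) driver will not show up here.

with open("/proc/modules") as f:
    loaded = any(line.split()[0] == "virtio_balloon" for line in f)

print("virtio_balloon module loaded:", loaded)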
Refusing to acknowledge the quite valid perspective of many people here is a blocker to innovation, because it prevents us from thinking of simple ways to deal with this misleading UI problem. Indeed, there might be very simple fixes, but to be able to figure them out we need to agree that the current situation can benefit from some improvement.
I also think that when someone's point of view is acknowledged, they feel more understood, which leads to more constructive dialogue. At the end of the day, you might be paying a subscription and you might think that your use of Proxmox is more legitimate, but I doubt that you would be prepared to spend a week diving into Proxmox's source code to fix an issue, whereas many homelabbers (hobby users) are prepared to do exactly that.
Meh, LnxBil actually referenced that page to show that your "valid concerns" are actually not very valid.
I think this would be misleading; whilst the cache can be reallocated as and when it's needed, it isn't unutilised memory.

Thank you for this. It is true. Yet, from a user standpoint, it would make more sense to display the buffer cache as free. Failing to acknowledge this, in my opinion, can come across (and indeed does come across) as dismissing user input.
I think this would be misleading; whilst the cache can be reallocated as and when it's needed, it isn't unutilised memory.
Perhaps what you want is a 3rd metric on the graph for available memory?
Also, with things like memory fragmentation and memory paging, all of this RAM is not necessarily available for new virtual machines. For a while I ran Proxmox with no swap, and then started getting OOMs whilst Proxmox was reporting multiple gigs as unutilised. I have since fixed it with a very small zram swap device, plus a somewhat larger low-priority swap area backed by SSD, although as of yet the SSD part has never been utilised; it is happy just allocating a few tens of MB to the zram.
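For anyone reproducing that setup, the relative priorities of the zram and SSD-backed swap areas are visible in /proc/swaps. A small sketch that lists them (the device names it prints will obviously depend on your configuration):

Code:
#!/usr/bin/env python3
# Sketch: list active swap devices and their priorities from /proc/swaps.
# With zram at a higher priority than an SSD-backed area, the kernel fills
# the zram device first and only spills over to the SSD under real pressure.

with open("/proc/swaps") as f:
    lines = f.read().splitlines()

print(lines[0])  # header: Filename  Type  Size  Used  Priority
for line in lines[1:]:
    fields = line.split()
    name, used_kib, prio = fields[0], fields[3], fields[4]
    print(f"{name:<30} used={used_kib} kiB  priority={prio}")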
Once more, RAM that is owned by a VM is not "available" to the hypervisor. It is only "available" within the VM. Outside of the special case of KSM, memory is not shared between VMs. If you want that, use a container.
Umm, I never said VM memory was available.
Why is this so hard for people to grasp?
I really don't know. Maybe they're just used to the lying all other products do? It's like the ever-green management traffic lights ... never displaying red.
The problem is that the hypervisor cannot reclaim it; the guest has to reclaim it, and that is where the discrepancy between the internal VM view and the hypervisor view comes from. The guest will not free memory properly (e.g. by writing zeros), because that would not make any sense for an OS on real hardware and is therefore not done. The data is just overwritten on the next write, and that's it. That's my whole point. PVE will display the memory actually used, because that is what matters to the hypervisor. If you've overcommitted the memory, you will end up with, at the very least, a slow system.
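In other words, the host-side number is essentially the resident set of the guest's QEMU/KVM process: pages the guest once used for cache and never returned still count, even if the guest considers them free. A rough sketch of that host-side view (the PID is a placeholder, and PVE itself gathers this through its own stack rather than like this):

Code:
#!/usr/bin/env python3
# Sketch of the host-side view: the resident set size of a guest's QEMU process.
# 12345 is a placeholder PID; substitute the PID of an actual VM process.

QEMU_PID = 12345

with open(f"/proc/{QEMU_PID}/status") as f:
    for line in f:
        if line.startswith("VmRSS:"):
            rss_kib = int(line.split()[1])
            print(f"host-visible resident memory: {rss_kib / 1024:.0f} MiB")
            break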
Also, KSM is ONLY available to QEMU, because it uses a special type of allocation to get memory that is only implemented in QEMU (via the madvise syscall). It will therefore not work with containers. VMs will scale better (in large numbers) due to KSM than containers, even though containers have a much smaller footprint. For smaller stuff, containers are much better, because the whole disk cache part is not part of the container memory, so you will see only the "actual" memory usage. It is even truer than without containers, because the cgroup memory jail will ensure that you won't share data with other containers (good or bad, depends on your viewpoint), whereas your default Linux will share low-level libraries among all programs and make it very hard to actually count the memory usage due to the sharing.
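Since KSM came up: its effect is visible on the PVE host itself through the kernel's sysfs counters, independent of anything the guests report. A small sketch that reads those counters (run on the host; the directory only exists if KSM is compiled in):

Code:
#!/usr/bin/env python3
# Sketch: summarise KSM activity on the host from /sys/kernel/mm/ksm/.
# Per the kernel docs, pages_sharing counts the duplicate pages folded into
# shared copies, i.e. roughly how many pages KSM is currently saving.

import os

def ksm(name):
    with open(f"/sys/kernel/mm/ksm/{name}") as f:
        return int(f.read())

page_size = os.sysconf("SC_PAGE_SIZE")
shared = ksm("pages_shared")    # distinct pages used as the shared copy
sharing = ksm("pages_sharing")  # sites deduplicated onto those copies

saved_mib = sharing * page_size / (1024 * 1024)
print(f"KSM active: {ksm('run') == 1}, pages_shared={shared}, approx. {saved_mib:.0f} MiB saved")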
I do not have a concern. I am just trying to understand the situation for myself, and the conclusion I am reaching is that:
1. Some users understand that these gauges, when red, do not imply that the VM is short of memory.
2. Some users are misled by these gauges.
From a usability/support point of view, it is likely that newcomers will always fall into category 2 before they move to category 1. In the process of transitioning there will be noise, and with this noise there will be overhead.

The problem is that it just depends on the guest OS and its settings. In some use cases it works as it should and in others it does not. You really have to know what you're dealing with in order to extract information out of this; e.g. after starting a VM, the footprint is very small, because memory is still empty, caches have not been filled and memory is not fragmented. In this instance, the graph is almost accurate (some caching has been done on boot). With time, it gets more and more inaccurate unless you do some memory cleanup, compaction, etc. Also, tweaks like swappiness will influence how much cache is used, so it heavily depends on the guest and its settings.
My 2 cents would be that the removal of the gauges is the best option, since it eliminates this confusion and does not seem to subtract any functionality. Those who know not to look will not have to look, and those who are confused won't be confused.
Yet you still need to look. It's not that there is no value in the gauge, it's just that it's not the value you might think. Imagine a VM is running for a month and only shows 50% usage. You can just lower the memory because it's like a high watermark value due to the caching.
I have not had that ever happen, even with the most underutilised VMs. The one I showed eating up 19GB was really not running anything at all.

The cache has to be filled by something; it does not magically fill itself up, and "not running anything at all" does not matter, as I thought we had already explained in great detail. The disk cache will fill up the remaining memory. Every file you ever read goes through the buffer cache (unless it is e.g. an O_DIRECT operation), so every read you ever did ended up in your main memory and will only be evicted if there is not enough space. In short: unless you have more RAM than disk space, your memory will always be fully used by the buffer cache. That is exactly what https://www.linuxatemyram.com/ describes.
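The effect is easy to reproduce inside any Linux guest: reading a large file grows the Cached figure in /proc/meminfo, even though nothing is "using" that memory in the everyday sense. A quick sketch (the file path is just a placeholder; point it at any large file that has not been read recently):

Code:
#!/usr/bin/env python3
# Sketch: watch the page cache grow while a file is read.
# /some/large/file is a placeholder for any multi-hundred-MB file.

def cached_kib():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Cached:"):
                return int(line.split()[1])

before = cached_kib()
with open("/some/large/file", "rb") as f:
    while f.read(1 << 20):  # read in 1 MiB chunks; each chunk lands in the page cache
        pass
after = cached_kib()

print(f"page cache grew by roughly {(after - before) / 1024:.0f} MiB")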
Yes, this would be better, yet the hypervisor does not know the actual usage like htop does, and it does not take everything into account (e.g. ARC and hugepages), so it would still not be a jack-of-all-trades view, yet better than before. You would need to have this view for every guest OS and, if not available, fall back to the current view. I think this is the main reason there is no better display: it would not be a general solution, only one that is harder to maintain.

I'm not going to go down the rabbit-hole of the virtues for/against changes to RAM reporting in PVE, but maybe the graph bar in PVE could be graphically changed to something similar to htop's inside a VM:
View attachment 81205
Where the yellow-bar section designates available RAM being used as cache, so the administrator "gets an idea" of what the VM is doing with the RAM.
This would at the very least be useful in stopping the numerous (duplicate) posts on these forums concerning HV/VM RAM consumption stats!
(The above would obviously be VM OS-dependent).
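To make the idea concrete, here is a tiny illustration that renders such a two-segment text bar from /proc/meminfo inside a Linux guest. It is only a sketch of the presentation, not a suggestion for how PVE should collect per-guest numbers:

Code:
#!/usr/bin/env python3
# Illustration only: an htop-style bar with a separate segment for cache.
# '#' marks memory in real use, '~' marks buffers/cache that could be reclaimed.

WIDTH = 50

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # kiB
    return info

m = meminfo()
total = m["MemTotal"]
cache = m["Buffers"] + m["Cached"] + m.get("SReclaimable", 0)
used = total - m["MemFree"] - cache

used_cols = round(WIDTH * used / total)
cache_cols = round(WIDTH * cache / total)
bar = "#" * used_cols + "~" * cache_cols
print(f"[{bar:<{WIDTH}}] {used // 1024}M used + {cache // 1024}M cache of {total // 1024}M")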