Search results for query: memory usage

  1. Y

Proxmox shows significantly higher RAM usage than is actually used

Hi, yes, ZFS is used on the host. However, only for the boot media, and back then for attaching a SAN. Here is the picture of the memory usage: The ARC cache: root@xego01:~# arc_summary -s arc ------------------------------------------------------------------------ ZFS Subsystem Report...
  2. Impact

Proxmox shows significantly higher RAM usage than is actually used

...I assume that the RAM is being used somewhere on the kernel side. Is ZFS also used on the host? I would like to see a picture of the memory usage diagram in node > Summary and the output of arc_summary -s arc. cat /proc/meminfo might perhaps, for the memory experts here...
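The two posts above ask for `arc_summary` and `/proc/meminfo` because the ZFS ARC lives in kernel memory and therefore shows up in the node's "used" figure. A minimal illustrative sketch (the `/proc/meminfo` values below are made-up sample data, not from the thread) of how the naive "used" number comes about:

```python
# Why ZFS ARC inflates a node's "used" memory figure.
# Sample /proc/meminfo lines (values in kB, hypothetical).
sample_meminfo = """\
MemTotal:       65536000 kB
MemFree:         4096000 kB
MemAvailable:   12288000 kB
Buffers:          512000 kB
Cached:          6144000 kB
"""

def parse_meminfo(text):
    """Return a dict of /proc/meminfo fields in kB."""
    fields = {}
    for line in text.splitlines():
        key, rest = line.split(":", 1)
        fields[key] = int(rest.strip().split()[0])
    return fields

m = parse_meminfo(sample_meminfo)

# Naive "used" = MemTotal - MemFree. Kernel allocations such as the ZFS ARC
# are neither free nor page cache, so they are fully counted here.
used_naive_gib = (m["MemTotal"] - m["MemFree"]) / 1024**2

# MemTotal - MemAvailable excludes what the kernel estimates as reclaimable
# (page cache etc.); the ARC may still not be reflected in MemAvailable,
# which is why the posters also ask for arc_summary output.
used_effective_gib = (m["MemTotal"] - m["MemAvailable"]) / 1024**2

print(f"naive used:     {used_naive_gib:.1f} GiB")
print(f"effective used: {used_effective_gib:.1f} GiB")
```

With these sample numbers the naive figure is several GiB higher than the MemAvailable-based one, which matches the symptom described in the thread title.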
  3. S

    Slow memory leak in 6.8.12-13-pve

I will replace 6.14.11-3-pve with 6.8.12-11-pve tomorrow morning on all nodes. Since I have no plans to make changes in the next few days, it will be a good way to compare.
  4. M

    Slow memory leak in 6.8.12-13-pve

    ...diskspd.exe load for a full week (~1 % wear on PM9A3 drives). With kernels 6.8.12-11-pve and 6.14.11-3-pve I saw no memory leaks at all, while the “-icefix” builds still showed a slow increase. I eventually replaced all E810s with Mellanox, and since then memory usage has remained stable.
  5. B

    Slow memory leak in 6.8.12-13-pve

    ...do rise for the first few hours, but eventually seem to flatten out. I see something like 300MB of used buffer_head slab allocations (per slabtop) which of course seems pretty excessive, but total system memory usage appears to be flat over the last six hours with the 6.14 ice-fix build.
  6. fiona

    Timeouts when calling a node's qemu endpoint

    Hi, Proxmox VE 9 collects more stats about virtual machines, which requires a bit more time: However, there was a recent improvement in qemu-server >= 9.0.23, currently available in the pve-test repository, that can help in certain situations (from apt changelog qemu-server):
  7. P

    Timeouts when calling a node's qemu endpoint

...all VMs (about 30-50) it was still OK... after some time it grew worse. We overcommit intentionally but keep the load as well as CPU and memory usage in a viable range. Most nodes have a swap partition on an Optane disk (~700 GB), some use a dedicated NVMe disk. KSM is max 50 GB, typically...
  8. Max Carrara

    Crashing API PBS 4.0.11

    What are the specs of your hardware? Have you monitored I/O pressure, memory & CPU usage, etc.?
  9. H4R0

    High load on opnsense vm after upgrade to pve 9

I'm also running multiple OPNsense instances on Proxmox and did not notice any CPU increase since PVE 9. @aklausing you should install the `os-qemu-guest-agent` plugin in OPNsense and enable `Qemu Guest Agent` in the Proxmox VM Options. Then you get the real memory usage in Proxmox.
  10. leesteken

    VM Memory Usage Shows 102% After Upgrading to PVE 9

    Because of the PCI(e) passthrough, all VM memory must be pinned into actual host memory, and the memory usage of the VM is therefore at least 100%.
  11. T

    Memory leak in 4.0.14

@l.leahu-vladucu After reviewing the links you provided, this has not fixed the issue. I have also found that if I reboot the PBS VM, the memory does not decrease on the node the VM is hosted on. Only if I shut down the VM does the RAM go back to normal usage.
  12. S

    VM Memory Usage Shows 102% After Upgrading to PVE 9

    Hello everyone, After upgrading to Proxmox VE 9, I noticed an issue with one of my virtual machines: its memory usage shows 102%. This problem did not occur on Proxmox VE 8. The QEMU Guest Agent and Ballooning are both enabled, but the issue still persists. What should I check or adjust next?
  13. mr44er

    pve 9 "memory on pfsense ? "

    No, still on 8. I was therefore surprised by the sentence "FreeBSD is known to not report memory usage details, which includes popular firewalls like pfSense or OPNsense."
  14. aaron

    pve 9 "memory on pfsense ? "

    Besides the Ballooning Device being enabled, is the BalloonService running in the Windows VM? Does it go down after a bit of waiting? In the screenshot, the VM has an uptime of 36 seconds... chances are that the VM hasn't fully booted yet and therefore the BalloonService might not be running yet.
  15. leesteken

    Memory usage over 100% in PVE 9.0.10 – is this normal?

    https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher :
  16. R

    Memory usage over 100% in PVE 9.0.10 – is this normal?

    Hi all, I’m running Proxmox VE 9.0.10 and noticed something strange in the dashboard. The memory usage shows 102.9% (8.23 GiB of 8.00 GiB). Is it normal for PVE to show memory usage above 100%, or could this indicate an issue with my setup? The CPU usage seems fine. Attached is a screenshot...
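The wiki page linked above explains that PVE 9 reports the host-side memory consumed by the whole VM process, measured against the configured VM memory, so QEMU overhead can push the percentage past 100%. A back-of-the-envelope check of the figure from this post:

```python
# Reproducing the dashboard figure from the post: 8.23 GiB reported
# against 8.00 GiB configured. The overhead interpretation (guest RAM
# plus QEMU process overhead) follows the upgrade-wiki explanation.
configured_gib = 8.00  # VM memory as configured
consumed_gib = 8.23    # host-side consumption shown in the dashboard

usage_pct = consumed_gib / configured_gib * 100
print(f"{usage_pct:.1f}%")  # → 102.9%, matching the screenshot
```

So a reading slightly above 100% is the expected display behavior after the upgrade, not a leak by itself.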
  17. K

    Proxmox suddenly stopped working

Hi there, I run Proxmox VE 9.0.9 on a NUC 8 i7 BEH. It contains a VM with Home Assistant and a container with Paperless-ngx installed. It turns out Proxmox started failing last night around 04:47:48 and was no longer reachable. Around 6:18 AM I did a manual reboot (power interrupted) of the...
  18. B

    Problem with PDM Administrator User

...and is only affecting the Dashboard: the Guests With the Highest CPU Usage, Nodes With the Highest CPU Usage, and Nodes With the Highest Memory Usage panels and the SDN: EVPN section, returning an API error (status = 403: permission check failed). All the rest works fine. When I am logged in as...
  19. V

    High IOPS on hosts but low IOPS on vms(using ceph)

Hi everyone, I'm testing a new PVE cluster (8.4.13, 4 hosts) now and facing a problem. I've already created Ceph with 14 OSDs (6 SSD + 8 HDD). After creating a new VM (Rocky Linux 8, 2 sockets, 4 cores, host CPU, 8 GiB memory, 500 G hard disk from ceph_ssd), I used fio to test IOPS: fio --name=TEST-randwrite...
  20. M

    Swap space usage with ZFS

    In my humble opinion, what you’re seeing is just normal Linux behavior: it may swap idle pages even with free RAM to keep ARC/cache. Personally I’d suggest trying zram (with zram-tools or systemd-zram-generator) instead of disabling swap — it gives you compressed RAM-based swap, avoids NVMe...
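For the zram suggestion above, a minimal sketch of what a `systemd-zram-generator` configuration might look like; the file path is the generator's standard location, but the sizes and algorithm below are placeholder assumptions, not a recommendation from the post:

```ini
# Hypothetical /etc/systemd/zram-generator.conf sketch for systemd-zram-generator.
[zram0]
# Cap the compressed swap device at half of RAM, up to 4096 MB (placeholder values).
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
# Prefer zram swap over the NVMe swap partition.
swap-priority = 100
```

After installing the generator and placing the file, `systemctl daemon-reload` followed by `systemctl start systemd-zram-setup@zram0.service` activates the device; `swapon --show` should then list it ahead of disk swap.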