Search results for query: memory usage

  1. P

    Constant Kernel Panics on PVE 9 fresh install

    journalctl -b -1 -e Nov 14 13:27:06 nibbler pveproxy[3121]: worker 47899 started Nov 14 13:27:07 nibbler pveproxy[47898]: worker exit Nov 14 13:27:46 nibbler pveproxy[3121]: worker 19895 finished Nov 14 13:27:46 nibbler pveproxy[3121]: starting 1 worker(s) Nov 14 13:27:46 nibbler pveproxy[3121]...
  2. P

    Constant Kernel Panics on PVE 9 fresh install

    Hi All, I keep getting kernel panics and hangs with the latest PVE 9, freshly installed on my server. journalctl doesn't give me any info; this is the log from an hour ago, although it crashed 10 minutes ago. I have omitted the new boot logs. journalctl --since "1 hour ago" Nov 14 14:35:01...
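
    A minimal set of journalctl invocations for pulling logs around a crash like the two threads above (a sketch, assuming persistent journald storage; the boot offsets depend on how often the host has restarted):

        journalctl --list-boots          # enumerate recorded boots and their offsets
        journalctl -k -b -1 -e           # kernel messages from the previous boot, jumping to the end
        journalctl --since "1 hour ago"  # everything logged in the last hour, across all units

    If the panic never reaches the disk, the previous boot's journal simply ends without a trace; a serial console or netconsole capture is usually needed to see the actual panic output.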
  3. D

    Random IO Error - Windows Server 2025

    Hello everyone, I'm experiencing a random "IO Error" that causes my two Windows Server 2025 Data Center VMs to randomly halt (yellow triangle in Proxmox). A reset/reboot resolves the issue temporarily. My environment details are below. I suspect a potential conflict with my configuration...
  4. D

    Random IO Error Server 2025 Data Center

    Hello everyone, I'm experiencing a random "IO Error" that causes my two Windows Server 2025 Data Center VMs to randomly halt (yellow triangle in Proxmox). A reset/reboot resolves the issue temporarily. My environment details are below. I suspect a potential conflict with my configuration...
  5. S

    [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam)

    Looks like I figured it out. I uninstalled the 17.5 driver and vgpu-unlock-rs. I rebooted to start 'clean'. Installed 17.6 driver NVIDIA-Linux-x86_64-550.163.02-vgpu-kvm-custom.run after patching it with 550.163.02.patch. Installed vgpu-unlock-rs from...
  6. T

    [NVidia] How to use one GPU as PCI VM passthrough and the other as shared compute?

    Hello all! Please lmk if this is in the wrong spot. I just finished installing a second GPU into my Proxmox host machine. I now have: root@pve:~# lspci -nnk | grep -A3 01:00 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:2d04] (rev a1) Subsystem: Gigabyte...
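
    One possible way to split two identical cards (a sketch, not from the thread; 0000:01:00.0 is the address from the lspci output above, the second card's address is assumed): since both GPUs share the same 10de:2d04 ID, an ids= entry in /etc/modprobe.d would grab both, so binding by PCI address is safer, e.g. with driverctl:

        apt install driverctl
        driverctl set-override 0000:01:00.0 vfio-pci   # reserve this GPU for VM passthrough
        driverctl list-overrides                       # verify; the other GPU stays on the nvidia driver for host compute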
  7. G

    Slow memory leak in 6.8.12-13-pve

    Note: running PVE 9 kernel 6.14.11-2-pve, Ceph 19.2.3-pve2, with Intel E810-C 4x25G in 802.3ad bonding on 4 HPE DL385 servers (1 TB RAM each) with default MTU 1500, we don't see a memory usage issue, or it's not growing fast enough to be visible in the "noise" (lots of VMs).
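
    A quick way to check whether kernel (slab) memory is what keeps growing over days (a sketch; the log path and interval are arbitrary):

        # append slab usage to a log once an hour and compare over a few days
        while true; do
            { date; grep -E 'MemAvailable|Slab|SUnreclaim' /proc/meminfo; slabtop -o -s c | head -n 15; } >> /root/slab.log
            sleep 3600
        done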
  8. M

    Proxmox Monitoring

    ...a bit at this. I discovered that it is possible to have such cool dashboards showing all the status and info about the PVE node, like memory usage, network throughput and so on; I assume this is done using Grafana and the like. I am completely new to this and would like to set up something...
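
    For reference, the raw numbers behind such dashboards can be pulled from the PVE API itself (a sketch; the node name is assumed to match the hostname), and Proxmox can also push metrics to InfluxDB/Graphite under Datacenter > Metric Server for Grafana to graph:

        pvesh get /nodes/$(hostname)/status                    # current CPU, memory and load figures for the node
        pvesh get /nodes/$(hostname)/rrddata --timeframe hour  # the same time series the built-in graphs use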
  9. RolandK

    weird fileserver issues after upgrading to proxmox 9

    ...different between the two LXCs here should be the Samba versions. Please provide/compare those. The behaviour of the copy restarting reminds me of a Samba bug/issue when storage is slower than the network, so check the memory usage of the smbd processes during the copy and whether smbd crashes and gets a new PID...
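
    A simple way to watch that during a copy (a sketch; the interval is arbitrary):

        watch -n 5 'ps -o pid,etime,rss,vsz,cmd -C smbd'   # a reset etime / new PID means smbd crashed and respawned; steadily growing rss points at a leak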
  10. Impact

    Problems with ZFS for cluster creation

    The logs of the failing replication jobs would be useful here. It depends. I'd need to see lsblk -o+FSTYPE,LABEL,MODEL. You can adjust the block size under Datacenter > Storage; the default is 16k. Are you sure the VM really needs that much RAM? Check with glances...
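
    The checks being asked for, roughly (a sketch; free -h is shown here in place of glances, and the memory check belongs inside the VM):

        lsblk -o+FSTYPE,LABEL,MODEL   # disk layout with filesystem type, label and model, as requested
        free -h                       # inside the VM: RAM actually in use vs. buffers/cache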
  11. S

    Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    I'm having the same issue, trying to vgpu a Quadro P6000 and a Titan Xp. I've patched 16.9 NVIDIA-Linux-x86_64-535.230.02-vgpu-kvm-custom.run with 535.230.02.patch, it installs fine, nvidia-smi returns both cards, as well as nvidia-smi vgpu, but nothing in mdevctl. nvidia-smi Sat Nov 8...
  12. H4R0

    High load on opnsense vm after upgrade to pve 9

    I don't have ballooning enabled since I had problems with that during memory pressure. Proxmox reports 2.5G, while the OPNsense GUI reports 2.4G, so very close. If I disable the qemu guest agent the Proxmox report is completely wrong though. Tried to do some more analysis on the 100mb...
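
    To see what QEMU's balloon device itself reports for a VM, independent of the GUI graph (a sketch; 100 is a placeholder VMID):

        qm config 100 | grep -E 'agent|balloon|memory'   # confirm agent/balloon/memory settings
        qm monitor 100                                    # then type "info balloon" at the monitor prompt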
  13. L

    [SOLVED] Docker inside LXC (net.ipv4.ip_unprivileged_port_start error)

    I think it's an uncontroversial statement that VMs require more resources than LXCs. By definition, the VM will always need a resource allocation that is separate from the host. Anything that you do to reduce the resource usage on a VM (e.g. use alpine) can be done with an LXC, but the resources...
  14. leesteken

    Memory usage overstated when passing through GPU

    This is normal when using PCI(e) passthrough. https://forum.proxmox.com/threads/very-high-memory-usage-on-vm.140907/post-630748 https://forum.proxmox.com/threads/pcie-passthrough.89190/post-390558 EDIT: Remove balloon: 0 if you want Proxmox to show the "memory usage" from within the VM.
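
    In config terms (a sketch; 104 is a placeholder VMID): a "balloon: 0" line in /etc/pve/qemu-server/104.conf disables the balloon device, and with PCI(e) passthrough all guest RAM is pinned anyway, so the host-side graph shows the full allocation. Dropping the line, as suggested above, lets Proxmox show the usage reported from within the VM:

        qm set 104 --delete balloon   # remove "balloon: 0" so memory usage is taken from inside the guest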
  15. K

    Memory usage overstated when passing through GPU

    Hi. I have an issue where the dashboard says one of my VMs consumes 100% (or more) of the RAM allocated, but when I run top, consumption is much lower. Anybody have any ideas, or files I should share? I am passing a GPU through. It is an Ubuntu server. Conf file below: agent...
  16. D

    [SOLVED] Docker inside LXC (net.ipv4.ip_unprivileged_port_start error)

    I do use LXC for Docker and individual services like Technitium and avahi (because Unifi's implementation isn't that good).
  17. J

    [SOLVED] Docker inside LXC (net.ipv4.ip_unprivileged_port_start error)

    Exactly.. Something like pi-hole which is self-containing (so doesn't need network shares and fiddling around with bind mounts) or jellyfin (which can be installed with apt install under Debian) where I want to use the host hardware (for jellyfin iGPU) but also share it with other lxcs/the host...
  18. B

    vGPU doesn't work with pytorch/nccl/vllm

    sure! $ nvidia-smi Fri Nov 7 14:13:38 2025 NVIDIA-SMI 580.95.05 Driver Version: 580.95.05 CUDA Version: 13.0 ...
  19. D

    Question about NUMA nodes and core pinning

    I have a host with two NUMA nodes, and I would like to create a VM with two NUMA nodes, with the cores from each VM node pinned to cores on the corresponding host node. So far, I've found the numa[n] options. But I'm a bit unclear about their usage, and the docs are pretty sparse. I tried...
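
    For reference, the numa[n] options can be set from the CLI like this (a sketch, not a tested answer; VMID 100, the core ranges and memory sizes are placeholders, and the VM's total memory must match the sum of the per-node sizes):

        qm set 100 --numa 1
        qm set 100 --numa0 cpus=0-7,hostnodes=0,memory=16384,policy=bind    # guest node 0 -> host NUMA node 0
        qm set 100 --numa1 cpus=8-15,hostnodes=1,memory=16384,policy=bind   # guest node 1 -> host NUMA node 1
        qm set 100 --affinity 0-7,64-71   # optional: restrict the VM's threads to these host cores (not per-vCPU pinning)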