Search results

  1. Question about NUMA nodes and core pinning

     Check this reply: https://forum.proxmox.com/threads/correct-vm-numa-config-on-2-sockets-host.173595/post-807464
  2. Poor Windows VM Performance with over 64GB RAM assigned

     Please check cat /proc/meminfo | grep Huge. In my setup I had to define: hugepagesz=1G hugepages=N default_hugepagesz=1024M, where N is the number of hugepages sized to match the VM memory. P.S. As far as I know, 1G and 2M hugepages cannot be combined (once again, check /proc/meminfo).
  3. Poor Windows VM Performance with over 64GB RAM assigned

     To use giant pages (1024 MB hugepages) you need to: explicitly set a fixed number of such pages in the boot loader (/etc/default/grub or /etc/kernel/cmdline), and set hugepages: 1024 in the VM conf file (manually). I would also recommend setting up the NUMA topology in the VM config file (manually) - check the PVE docs. With...
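     The two hugepage replies above can be sketched as one concrete setup. This is only an illustration of the steps they describe; the VM ID (100), page count (64), and memory size are hypothetical and must match your own VM:

     ```
     # /etc/default/grub (legacy boot) - reserve 64 x 1 GiB pages at boot,
     # enough to back a 64 GiB VM; then run update-grub and reboot
     GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=64 default_hugepagesz=1G"

     # verify the reservation after reboot
     #   cat /proc/meminfo | grep Huge

     # /etc/pve/qemu-server/100.conf - back the VM memory with 1024 MB pages
     memory: 65536
     hugepages: 1024
     numa: 1
     ```

     Note that hugepages: in the VM conf takes the page size in MB (2 or 1024), not the page count; the count is fixed on the kernel command line.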
  4. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

     In our environment we use dozens of Windows Server 2019/2022 machines, and we don't see any issues with the version 1.285 drivers. However, there is only 1 server with the MSSQL database engine (2019, if I'm not mistaken), and I'm not entirely sure if it has been updated to the virtio drivers version 1.285...
  5. W2025 virtio NIC -> connection drop outs

     Have you filed a bug report on the virtio GitHub?
  6. Correct VM NUMA config on 2 sockets HOST

     Thanks, it's all clear now.
  7. Correct VM NUMA config on 2 sockets HOST

     Good day. Help me figure out and implement the correct virtual machine configuration for a dual-socket motherboard (PVE 8.4, kernel 6.8.12). Given: root@pve-node-04840:~# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order...
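     The lscpu output in the question is truncated, but the manual NUMA topology that the replies above recommend is set per guest node in the VM conf. As a hedged sketch only: the VM ID (101), core counts, and memory sizes below are hypothetical assumptions for a 2-socket host:

     ```
     # /etc/pve/qemu-server/101.conf - hypothetical 8-vCPU / 16 GiB VM
     # spread across both host sockets, one guest NUMA node pinned per socket
     sockets: 2
     cores: 4
     memory: 16384
     numa: 1
     numa0: cpus=0-3,memory=8192,hostnodes=0,policy=bind
     numa1: cpus=4-7,memory=8192,hostnodes=1,policy=bind
     ```

     Each numaN line maps a range of guest vCPUs and a slice of guest memory onto one host NUMA node; policy=bind keeps that memory from being allocated on the remote socket.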
  8. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

     I didn't manage to install v266 on a dozen of my VMs. The MSI fails and reverts, unfortunately.
  9. [viogpu3d] Virtio GPU 3D acceleration for windows

     Does anyone have precompiled drivers?
  10. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

     On half of my servers I can't install the new VirtIO tools. Setup breaks and reverts with error 0x80070643. Any clue what could be wrong?
  11. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

     From my perspective the virtio scsi driver is the key here. Still waiting for any progress from the virtio devs. Feel free to ping them in the corresponding topic on GitHub (check the link in the messages above).
  12. Memory leak(?) on 6.8.4-2/3 PVE 8.2

     I'm still on the 6.5 kernel. I'll wait two or three kernel updates before upgrading.
  13. Memory leak(?) on 6.8.4-2/3 PVE 8.2

     In my case I don't have non-existent export shares, only a dozen active exports from different ZFS pools/datasets.
  14. Memory leak(?) on 6.8.4-2/3 PVE 8.2

     Some images. First server (active): a noticeable memory leak (I had to drop the ARC cache at the end). Second server (exactly the same, but without NFS activity - redundant storage with ZFS replicas).
  15. Proxmox VE 8.2 released!

     There is a possible memory leak with kernel 6.8.3 and nfs-kernel-server (check this thread: https://forum.proxmox.com/threads/memory-leak-on-6-8-4-2-3-pve-8-2.146649/ )
  16. Memory leak(?) on 6.8.4-2/3 PVE 8.2

     Same story. After upgrading to PVE 8.2 (kernel 6.8.4-3) I'm facing the same memory leak. ZFS pool with NFS shares and high IO load (the ZFS ARC size is limited and does not grow).
  17. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

     Can confirm: kernel 6.8.4, KSM enabled, numa_balancing=1, all kernel mitigations on. Works as expected. No ICMP echo reply time increase, no other freezes. Finally!