Recent content by Whatever

  1. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    I didn't manage to install v266 on a dozen of my VMs. The MSI fails and reverts, unfortunately.
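
    A verbose installer log usually shows which action triggers the rollback; a minimal sketch, assuming the virtio-win MSI (the file and log names below are illustrative):

        rem Capture a verbose MSI log to see which action causes the revert
        msiexec /i virtio-win-gt-x64.msi /L*v C:\virtio-install.log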
  2. [viogpu3d] Virtio GPU 3D acceleration for windows

    Does anyone have precompiled drivers?
  3. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    On half of my servers I can't install the new VirtIO tools. Setup breaks and reverts with error 0x80070643. Any clue what could be wrong?
  4. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    From my perspective the virtio-scsi driver is the key here. Still waiting for any progress from the virtio devs. Feel free to ping them in the corresponding topic on GitHub (check the link in the messages above).
  5. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    I'm still on kernel 6.5. I'll wait another 2-3 kernel updates before upgrading.
  6. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    In my case I don't have any non-existent export shares, only a dozen active ones exported from different ZFS pools/datasets.
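
    For reference, the active exports can be enumerated with standard tooling; a sketch, nothing specific to this setup:

        # Datasets shared via the ZFS sharenfs property
        zfs get -t filesystem sharenfs
        # What the NFS kernel server currently exports
        exportfs -v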
  7. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Some images. First server (active): noticeable memory leak (I had to drop the ARC cache at the end). Second server (exactly the same, but without NFS activity; redundant storage with ZFS replicas).
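
    The ARC drop mentioned above can be done at runtime through stock OpenZFS module parameters; a sketch, where the 4 GiB ceiling is only an example value:

        # Temporarily lower the ARC ceiling, then ask the kernel to reclaim caches
        echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
        echo 3 > /proc/sys/vm/drop_caches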
  8. Proxmox VE 8.2 released!

    There is a possible memory leak with kernel 6.8.3 and nfs-kernel-server (check this thread: https://forum.proxmox.com/threads/memory-leak-on-6-8-4-2-3-pve-8-2.146649/).
  9. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Same story. After upgrading to PVE 8.2 (kernel 6.8.4-3) I'm facing the same memory leak: a ZFS pool with NFS shares and high IO load (ZFS ARC size is limited and does not grow).
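
    For context, an ARC limit like the one mentioned is typically made persistent via a modprobe option; a sketch, with an example 8 GiB ceiling:

        # /etc/modprobe.d/zfs.conf: cap the ZFS ARC at 8 GiB (example value)
        options zfs zfs_arc_max=8589934592
        # then rebuild the initramfs so the option applies at early boot:
        #   update-initramfs -u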
  10. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Can confirm: kernel 6.8.4, KSM enabled, numa_balancing=1, all kernel mitigations on. Works as expected: no increase in ICMP echo reply times, no other freezes. Finally!
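
    The settings above can be verified through standard kernel interfaces; a sketch:

        uname -r                                           # running kernel
        sysctl kernel.numa_balancing                       # 1 = automatic NUMA balancing on
        cat /sys/kernel/mm/ksm/run                         # 1 = KSM active
        grep -r . /sys/devices/system/cpu/vulnerabilities  # mitigation status per CPU issue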
  11. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Will do my best as soon as I get a chance and report back
  12. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    @fweber I've managed to dedicate one node with mitigations=on and a single client RDS server (which runs free of charge and shouldn't complain too much). So I'm ready to test a new kernel with the patch if you provide one. Right now NUMA balancing has been switched off and the RDS server works smoothly...
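
    The NUMA balancing toggle referred to here is the standard sysctl; a sketch, where the drop-in file name is illustrative:

        # Disable automatic NUMA balancing on the running system
        sysctl -w kernel.numa_balancing=0
        # Persist it across reboots (file name is an example)
        echo 'kernel.numa_balancing = 0' > /etc/sysctl.d/99-numa-balancing.conf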