Search results

  1. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    On half of my servers I can't install the new VirtIO tools. Setup breaks and reverts with error 0x80070643. Any clue what could be wrong?
  2. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    From my perspective the virtio-scsi driver is the key here. Still waiting for any progress from the virtio devs. Feel free to ping them in the corresponding topic on GitHub (check the link in the messages above).
  3. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    I'm still on the pve 6.5 kernel and will wait 2-3 kernel updates before upgrading.
  4. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    In my case I don't have any non-existent export shares, only a dozen active exports from different ZFS pools/datasets.
  5. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Some images: first server (active) - noticeable memory leak (I had to drop the ARC cache in the end); second server (exactly the same, but without NFS activity - redundant storage with ZFS replicas).
  6. Proxmox VE 8.2 released!

    There is a possible memory leak with kernel 6.8.3 and nfs-kernel-server (check this thread: https://forum.proxmox.com/threads/memory-leak-on-6-8-4-2-3-pve-8-2.146649/).
  7. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Same story. After upgrading to PVE 8.2 (kernel 6.8.4-3) I'm facing the same memory leak: a ZFS pool with NFS shares and high I/O load (the ZFS ARC size is capped and does not grow; see the ARC sketch after this list).
  8. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Can confirm: kernel 6.8.4, KSM enabled, numa_balancing=1, all kernel mitigations on. Works as expected: no ICMP echo reply time increase, no other freezes. Finally!
  9. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Will do my best as soon as I get a chance and report back.
  10. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    @fweber I've managed to dedicate one node with mitigations=on and a single client RDS server (which is running free of charge, so they shouldn't complain too much). So I'm ready to test a new kernel with the patch if you provide one. Right now numa balancing has been switched off and the RDS server works smoothly...
  11. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Thanks a lot! Just one thing that drove me crazy: if disabling numa balancing is a completely legitimate solution, shouldn't it be disabled by default? (A sysctl sketch for disabling it follows after this list.)
  12. Disable fs-freeze on snapshot backups

    Then check this thread and the links to the virtio driver Git repo (there is some advice from the devs). If you are able to reproduce this issue easily, it would be very helpful for finding a solution or workaround.
  13. Disable fs-freeze on snapshot backups

    @roms2000 have you checked the syslog after the reboot? Is there anything related to Event ID 129 in the Windows event log?
  14. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    I performed some tests on my cluster and can confirm that tuning vzdump.conf can be used as a workaround. Max average throughput of my 5-node cluster (with Ceph) is ~800 MiB/s, and of the PBS storage ~300 MiB/s. 1) I set vzdump.conf as follows: bwlimit: 150000, ionice: 8 (see the sketch after this list). 2) On PBS I limited input...
  15. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    However, with ionice=8 set in vzdump.conf I can see it in the backup log: INFO: starting new backup job: vzdump --exclude 101,100,103 --notes-template '{{guestname}}' --storage PBS --mode snapshot --mailto ... --all 1 --mailnotification failure --node 063-pve-04446 INFO: Starting Backup of VM 6302...
  16. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    The PVE and virtio devs are already aware of this problem, and I hope they will find out what goes wrong and fix it ASAP.
  17. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    What do you mean by "such high values"? The defaults are 2 MB with a 60 s timeout; tuned, 256 KB with a 90 s timeout. You are free to test just one of them anyway.
  18. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    I will try the following: tune the virtio-scsi driver settings via the Windows registry, and set ionice=8 in vzdump.conf on all nodes in the cluster.
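For context on result 7: capping the ZFS ARC on a Proxmox node is normally done through the zfs kernel module options. A minimal sketch, assuming a hypothetical 16 GiB cap (the post does not give the actual value used):

    # /etc/modprobe.d/zfs.conf - cap the ARC; value is in bytes, 16 GiB here is only an example
    options zfs zfs_arc_max=17179869184

    # make the module option take effect on the next boot
    update-initramfs -u

    # or change the cap at runtime without a reboot
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max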
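On the numa balancing setting referenced in results 8, 10 and 11: the flag lives in the kernel's sysctl tree. A minimal sketch of checking and disabling it; the file name under /etc/sysctl.d/ is only illustrative:

    # check the current state (1 = automatic NUMA balancing enabled, 0 = disabled)
    cat /proc/sys/kernel/numa_balancing

    # disable it at runtime
    sysctl -w kernel.numa_balancing=0

    # keep it disabled across reboots (file name is an example)
    echo "kernel.numa_balancing = 0" > /etc/sysctl.d/99-numa-balancing.conf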
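And the vzdump.conf workaround from results 14 and 18, as a sketch: the bwlimit and ionice values are the ones quoted in the post; the comments are only my reading of them.

    # /etc/vzdump.conf (set per node)
    # limit backup I/O bandwidth; the value is in KiB/s, so 150000 is roughly 146 MiB/s
    bwlimit: 150000
    # run backup I/O at the lowest priority (range 0-8)
    ionice: 8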
