Version 100.85.104.20800 installed now.
With virtio 0.1.266, the driver version should be 100.100.*.
A few notes on the configurations mentioned above, since the fix is in the vioscsi driver:

- aio=io_uring or aio=native with vioscsi still runs under the QEMU global mutex.
- "VirtIO Block" is the viostor driver and not vioscsi.
- The fixed path is "VirtIO SCSI single", not "VirtIO Block".
- "Switched VM OS disk from VirtIO block to ide": both of those don't use the fixed driver.
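To illustrate how these settings map to a VM config, here is a minimal sketch; the VMID 100 and the storage name local-zfs are placeholders, not taken from this thread:

[CODE]
# Use the controller variant that gets a dedicated IOThread per disk:
qm set 100 --scsihw virtio-scsi-single
# Attach the disk with an IOThread and io_uring, so its I/O does not
# run under the QEMU global mutex:
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1,aio=io_uring
[/CODE]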
I'd tend to agree with the other suggestions in this thread that this might rather be a performance problem of the underlying storage. I'd expect a RAIDZ2 pool with spinning disks to be quite slow, and the ahcistor warnings when attaching the disks via SATA (the VirtIO SCSI/Block guest drivers wouldn't be involved in that case), together with the fact that the issues improve when scrubbing on the host is paused, also hint in that direction.

If you'd like to debug this further: can you check the IO pressure in /proc/pressure/io [1] while the VM is running / while you are seeing the issues? Also, would it be possible to temporarily move the VM disks to a fast local storage (like a local SSD or NVMe) and see if you still see issues then? If you'd like to look into this further, it would be great if you could open a new thread; feel free to reference it here.
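For reference, a simple way to sample the pressure stall information while reproducing the issue (the avg10/avg60/avg300 columns show the percentage of time tasks were stalled on I/O):

[CODE]
# Re-read the kernel's I/O pressure stats every second while the VM is busy:
watch -n 1 cat /proc/pressure/io
[/CODE]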
I made a few other changes since that last post:

- Stopped scrub runs on both Proxmox servers. This seemed to have the most impact, and is probably the cause of the OS lockups (see the zpool commands sketched below).
- Switched the VM OS disk from VirtIO Block to IDE (this disk has no requirement to be fast, it just needs to be reliable). There had been no reset warnings related to this disk.
- Downgraded the vioscsi driver to 208 as mentioned.

I was still seeing errors (1/min), but interestingly the general disk stability was a lot better, avoiding the 1-minute pauses seen previously. The backups are now working, so the pressure to fix this has dropped off substantially.
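In case it helps others, a minimal sketch of controlling a scrub; the pool name tank is a placeholder for your actual pool:

[CODE]
# Check whether a scrub is currently running:
zpool status tank
# Pause a running scrub (issuing "zpool scrub tank" again resumes it):
zpool scrub -p tank
# Or cancel it entirely:
zpool scrub -s tank
[/CODE]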
I regret to share that I am using Proxmox VE 8.3 with spinning disks and kernel 6.8, and despite varying the VM configurations, there is consistently a very high I/O delay. This issue is particularly noticeable during operations involving both network and disk activity, such as backups, restores, and snapshots.

Are these Windows VMs using VirtIO SCSI, and if yes, do you also see the device resets discussed in this thread in the Windows event viewer? The issue rather sounds like the underlying storage may be the culprit. Could you please open a new thread and provide some more details, including the output of pveversion -v, the config of an affected VM (the output of qm config VMID), the storage configuration (the contents of /etc/pve/storage.cfg) and some more details on your storage setup?

These are Linux-based VMs only. All of the VMs experience significant I/O wait issues, but the Proxmox host itself does not appear to be affected.
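For completeness, the requested output can be collected like this; the VMID 100 is a placeholder for the affected VM:

[CODE]
# Proxmox VE package versions:
pveversion -v
# Config of the affected VM (replace 100 with the actual VMID):
qm config 100
# Storage configuration:
cat /etc/pve/storage.cfg
[/CODE]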