Try changing your KVM VM's disk from virtio to ide:
from:
virtio0: local:900/vm-900-disk-1.qcow2,size=32G
to:
ide0: local:900/vm-900-disk-1.qcow2,size=32G
stop/start the VM and try again.
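In case you prefer the CLI over editing the config file by hand, the same change can be done with qm (just a rough sketch, assuming the example VMID 900 from above and that this disk is the boot disk):
qm stop 900
qm set 900 --delete virtio0                                # detach the virtio disk (it should show up as unused0)
qm set 900 --ide0 local:900/vm-900-disk-1.qcow2,size=32G   # reattach the same image as ide0
qm set 900 --bootdisk ide0                                 # only needed if this is the boot disk
qm start 900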
I think you are in the same virtio IO wait/delay boat with us :(
https://forum.proxmox.com/threads/3000-msec-ping-and-packet-drops-with-virtio-under-load.36687/
Unfortunately, there isn't any official response from the Proxmox team about this yet, and the only temporary workaround found so far is to...
If you just want to migrate your 3.x VMs to a new PVE 5 installation, you don't need to back up or restore anything from the Proxmox host itself. You are not migrating the Proxmox node, you are just relocating the VMs to a new place. You need to create a new PVE 5 installation with the same (or similar...
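If a concrete example helps, a plain backup/restore per VM is usually all it takes. A rough sketch (VMID 900, lzo compression and a target storage called local are just placeholders here, adjust to your setup):
# on the old PVE 3.x node
vzdump 900 --mode stop --compress lzo --dumpdir /var/lib/vz/dump
# copy the archive over to the new PVE 5 node
scp /var/lib/vz/dump/vzdump-qemu-900-*.vma.lzo root@new-pve5:/var/lib/vz/dump/
# on the new PVE 5 node
qmrestore /var/lib/vz/dump/vzdump-qemu-900-*.vma.lzo 900 --storage local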
@Andreas Piening You have to change the bus from virtio to ide on the hard disks, not the controller type (virtio-scsi).
You shouldn't have problems booting after the change from virtio to ide, I think. Switching back shouldn't be a problem either, because you already have the virtio drivers...
Most probably there are more interrupts because of IDE. Yes, it will have some impact on the full disk write/read performance (maybe 10/20/30% lower throughput). Currently the (ide) VM is getting about 50 Megabytes/s write performance. This is more than enough for me. With virtio it would probably be...
I have switched almost all of my VMs from virtio to ide. Here is an example graph of the iowait difference from one of my nodes:
This node is still running the same VMs with the same workload on them. I suspect the red peaks (and maybe some of the green) in the ide section of the graph are because...
I don't think it matters. The IO wait is showing up on all the other nodes' CPUs/cores/threads (not only on the one that is running the dd test).
This node is currently (intentionally) left with only two VMs, each assigned 1 socket / 8 cores. The test iowait VM on the same host node had only...
Changing the vm.* parameters didn't help here. I have performed additional tests trying to check whether the fault is in the HBA queue parameters or in the storage IOPS ... but a simple test proved to me that it is entirely a VM virtio/IO fault. Creating a new LVM locally on one of the nodes...
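For reference, the test itself is nothing fancy, roughly like this (path and size are just examples): a sequential write inside one test VM while watching the IO wait on the host nodes:
# inside the test VM: sequential write bypassing the page cache
dd if=/dev/zero of=/root/ddtest.img bs=1M count=4096 oflag=direct
# on the host node(s): watch the %iowait / wa column while the dd runs
iostat -x 2    # from the sysstat package
vmstat 2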
Thank you gkovacs for your suggestions. I will definitely give the parameters you recommended a try and will report back when I have the results.
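For anyone following along who isn't familiar with these knobs, adjusting the vm.* sysctls looks roughly like this (the values below are purely illustrative, not the ones recommended in this thread):
sysctl -w vm.dirty_ratio=10               # example value only
sysctl -w vm.dirty_background_ratio=5     # example value only
# persist them in /etc/sysctl.conf or /etc/sysctl.d/ once they prove useful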
This problem is really weird for me - are we all using something exotic in common? Virtio disks on shared storage - I doubt it - it looks...
The issue is not gone. It was mitigated by moving this particular VM to the local storage of one of the host nodes in the cluster. All other VMs are running on the shared storage. So the local disk is idle and there is no chance for any io wait.
The thing is that this VM (it is a firewall...
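For anyone wanting to try the same mitigation: moving a VM's disk from the shared storage to a node's local storage can be done with something like this (the VMID and storage name are just placeholders):
qm move_disk 101 virtio0 local-lvm    # moves the disk image to the node's local storage
# the old copy on the shared storage can be removed afterwards once everything works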
Hi, I think we are experiencing exactly the same issue here:
https://forum.proxmox.com/threads/3000-msec-ping-and-packet-drops-with-virtio-under-load.36687/