Applying pve-qemu-kvm 10.2.1-1 may cause extremely high “I/O Delay” and extremely high “I/O pressure stalls”. (Patches are available in the test repository.)

I can also confirm it fixes the issue, but for it to take effect I had to reboot the VMs (from Proxmox, not from within the VM).
 
I had to reboot the VMs (from Proxmox, not from within the VM).
AFAIK, that is perfectly normal - the new pve-qemu-kvm is only applied when the kvm process is restarted on the host, not by a reboot from within the VM, which technically does not restart the host-side kvm process.
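As a quick way to check whether a VM is still running the old build, recent PVE versions report the running QEMU version in the verbose status output (the VMID 100 below is only an example):

```shell
# On the PVE host: show detailed status for VM 100, including the
# QEMU version the kvm process was started with
qm status 100 --verbose | grep running-qemu
```

If the reported version still matches the old package after upgrading, the kvm process has not been restarted yet.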
 
I just wondered about this and had these thoughts:

When I migrate a VM from pve-node-1 to pve-node-2 (and pve-node-2 has the patch), the patch will be applied and the problem is fixed without rebooting the VM from Proxmox. Why can't Proxmox then "upgrade" or "hand over" the VM from the old pve-qemu-kvm to the newly installed pve-qemu-kvm? The downtime (since it is handed over internally, not over the network) should be much lower than with a live migration.

This would spare Proxmox from needing to reboot the VMs, which I think would be a cool feature.

Does this make sense, and is it possible?
 
When you migrate the VM to another node, the new node freshly starts the kvm instance (with the new pve-qemu-kvm version) - in fact it is also freshly "booting" the VM, as if it were booting just now - only the state (incl. RAM) has been preserved (same as a hibernated system that is restarted).

Theoretically, this could also be done on the same node by hibernating the VM (saving its state incl. RAM) and then restarting it. I'm not sure what the time gain would be versus a complete clean reboot of the VM, although I guess this will be VM-dependent.
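As a sketch, that same-node hibernate/resume cycle would look like this from the CLI (VMID 100 is a placeholder; this assumes enough free storage to hold the saved RAM state):

```shell
# Suspend the VM to disk: saves RAM and device state, then stops the
# old kvm process that was started with the previous pve-qemu-kvm
qm suspend 100 --todisk 1

# Resume: starts a fresh kvm process, now using the newly installed
# pve-qemu-kvm, and restores the saved state into it
qm resume 100
```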
 
Thank you for making the correction.

I have confirmed that the fix has been applied (the workaround has been implemented).

Since it is unclear when a fundamental fix will land in the kernel, I will mark this thread as resolved in a few weeks if the graph issue does not recur.

* Please create a separate thread to report any issues other than graph-related ones. Do not add them to this thread.



If you have placed a hold on the package, please remove it, as @monkfish said.

//Hold pve-qemu-kvm (block upgrades)
Code:
apt-mark hold pve-qemu-kvm

//Clear the hold (allow upgrades again)
Code:
apt-mark unhold pve-qemu-kvm
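To check whether the package is actually held before clearing the hold, something like this should work on any apt-based system:

```shell
# List all packages currently on hold; pve-qemu-kvm will appear here if held
apt-mark showhold

# Clear the hold, then pull in the fixed package
apt-mark unhold pve-qemu-kvm
apt full-upgrade
```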
 
I just wondered about this and had these thoughts:

When I migrate a VM from pve-node-1 to pve-node-2 (and pve-node-2 has the patch), the patch will be applied and the problem is fixed without rebooting the VM from Proxmox. Why can't Proxmox then "upgrade" or "hand over" the VM from the old pve-qemu-kvm to the newly installed pve-qemu-kvm? The downtime (since it is handed over internally, not over the network) should be much lower than with a live migration.

This would spare Proxmox from needing to reboot the VMs, which I think would be a cool feature.

Does this make sense, and is it possible?
Yes, it's possible, but not yet implemented in Proxmox VE: https://bugzilla.proxmox.com/show_bug.cgi?id=3303
 
Yes, it's possible, but not yet implemented in Proxmox VE
But for the poster's use-case of getting the new pve-qemu-kvm version applied, wouldn't hibernate/resume (or, from the CLI, qm suspend <vmid> --todisk 1 and then qm resume <vmid>) already accomplish this?
 
But for the poster's use-case of getting the new pve-qemu-kvm version applied, wouldn't hibernate/resume (or, from the CLI, qm suspend <vmid> --todisk 1 and then qm resume <vmid>) already accomplish this?
Yes, this is an alternative for now.
 