VM I/O errors on all disks

kristian.kirilov

Active Member
Nov 17, 2016
Actually, I was wrong: because it uses UUIDs, and the UUID remains the same no matter whether you are using SCSI or VirtIO.
I'll change it and check whether the errors still appear.
Many thanks!
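The UUID point above can be sanity-checked inside the guest before switching bus types; a minimal sketch (the fallback message is my own wording, not from the post):

```shell
# Inside the guest: confirm /etc/fstab mounts filesystems by UUID rather than
# by device name (vdaX vs. sdaX), so the VirtIO<->SCSI change won't break boot.
lsblk -o NAME,UUID,MOUNTPOINT   # shows each block device with its UUID
grep "UUID=" /etc/fstab || echo "fstab references device names -- fix before switching"
```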
 

shukko

Member
Oct 24, 2008

Attachments

  • Screenshot from 2021-11-26 18-30-51.png

kristian.kirilov

Active Member
Nov 17, 2016
Yeah, already did it :) all good.
The issue disappeared!
Many thanks.

Bear in mind that Windows may blue-screen (BSOD) when you make this change. Or at least mine did ;-)
Luckily I don't have any production Windows OSs.
 

tr_inett

Member
Apr 22, 2014
I can confirm the problem. After updating to Proxmox 7.1, several Linux VMs in our cluster that were still running with the default SCSI controller (LSI 53C895A) and virtioX disks also had "Buffer I/O error" or "print_req_error: I/O error" in the log; in one case, data corruption also occurred within the VM.

Changing the SCSI controller to "VirtIO SCSI" and the VM hard disks to "scsiX" also helped here.
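For reference, the same controller/disk change can also be made from the host CLI; a hedged sketch assuming VM ID 100, a storage named local-zfs, and a disk currently attached as virtio0 (all placeholders -- check your own `qm config` output first):

```shell
# Sketch only: the VM ID, storage name and volume name are placeholders.
qm set 100 --delete virtio0                 # detach the disk (it becomes unusedX)
qm set 100 --scsihw virtio-scsi-pci         # switch the controller to VirtIO SCSI
qm set 100 --scsi0 local-zfs:vm-100-disk-0  # reattach the same volume as scsi0
qm set 100 --boot order=scsi0               # keep the VM bootable from that disk
```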
 

eds

Member
Aug 17, 2019
I had exactly the same problem with a 2016 KVM machine that was migrated to a new PVE 7 host.
The workaround with the SCSI controller seems to work.
Now I will have to find out what data, if any, has been lost. I only found out about this problem because my ClamAV database was corrupted!

Maybe a moderator can pin this post and also add a chapter to the "known issues" for migrating to PVE 7?
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Known_upgrade_issues
 

Funar

New Member
Oct 8, 2021
This might help:
https://gitlab.com/qemu-project/qemu/-/issues/649

I ran into the same I/O errors on an iSCSI-backed Proxmox deployment, but my Fibre Channel one seems unaffected. Downgrading to QEMU 6.0.0 (pve-qemu-kvm 6.0.0-4) resolves the issue without having to change from VirtIO to SCSI.

apt install pve-qemu-kvm=6.0.0-4

You'll have to shut down any running VMs and start them again afterwards so they pick up QEMU 6.0.0.
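Putting those steps together, a sketch as root on the PVE host (the `apt-mark hold` step and the VM ID 100 are my additions, not from the post):

```shell
apt install pve-qemu-kvm=6.0.0-4   # pin back to the last known-good QEMU build
apt-mark hold pve-qemu-kvm         # optional: stop apt from re-upgrading it
qm shutdown 100 && qm start 100    # repeat per VM so each picks up QEMU 6.0.0
```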
 

kristian.kirilov

Active Member
Nov 17, 2016
Thanks @Funar, so we can clearly say this is a bug.
Not sure how to make this official; I hope some of the Proxmox developers read the forums.
 

kristian.kirilov

Active Member
Nov 17, 2016
I ran the test this morning, and it seems to work fine for me. I did the same test as in the past (downloading the gitlab-ce package, which is almost 1 GB in size). The error messages no longer appear.

Because my test is not comprehensive, could the other guys please test as well and comment here?
Thanks ;-)
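For anyone repeating the check, a rough equivalent of the test above (the file path, size, and exact error patterns are assumptions based on the messages quoted earlier in the thread):

```shell
# Generate sustained write I/O inside the guest, then scan the kernel log
# for the error signatures reported in this thread.
dd if=/dev/zero of=/tmp/io-test.bin bs=1M count=64 conv=fsync status=none
dmesg 2>/dev/null | grep -E "Buffer I/O error|print_req_error" || echo "no I/O errors logged"
rm -f /tmp/io-test.bin
```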
 

nhand42

Member
Jan 17, 2019
Same issue here with a Debian container. I could reliably trigger I/O errors when building a development project inside it, which caused the root partition to remount read-only. I was also using VirtIO block over ZFS storage. Upgrading to the pve-qemu-kvm_6.1.0-3 package linked by Tom made the problems go away.
 
