I can confirm the problem. After updating to Proxmox 7.1, several Linux VMs in our cluster that were still running the default SCSI controller (LSI 53C895A) with virtioX disks logged "buffer I/O error" or "print_req_error: I/O error" after the update; in one case, data corruption also occurred inside the VM.
Changing the SCSI controller to "VirtIO SCSI" and the VM hard disks to "scsiX" also helped here.
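For anyone wanting to apply the same workaround from the CLI instead of the web UI, this is roughly what it looks like. `qm set` and the per-VM config file are standard Proxmox; the VMID 100 and disk name virtio0 are just placeholders for your own setup, and you should shut the VM down first:

```shell
# Switch the VM's SCSI controller to VirtIO SCSI (VMID 100 is an example)
qm set 100 --scsihw virtio-scsi-pci

# The disk itself must be renamed from virtioX to scsiX in the VM config,
# e.g. by editing /etc/pve/qemu-server/100.conf and changing the line
#   virtio0: local-lvm:vm-100-disk-0,size=32G
# to
#   scsi0: local-lvm:vm-100-disk-0,size=32G
# (storage and disk names will differ on your system)
```

Remember to also update the boot order if it references the old virtio0 device, then start the VM again.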
I had exactly the same problem, with a 2016 KVM machine being migrated to a new PVE 7 host.
The workaround with the SCSI controller seems to work.
Now I will have to find out what data, if any, has been lost. I only found out about this problem because my ClamAV database was corrupted!
I ran into the same I/O errors on an iSCSI-backed Proxmox deployment, but my FibreChannel one seems unaffected. Downgrading to QEMU 6.0.0 (pve-qemu-kvm 6.0.0-4) resolved the issue for me without having to change from virtio to SCSI.
apt install pve-qemu-kvm=6.0.0-4
You'll have to shut down any running VMs and start them up again afterwards so that they are actually running on 6.0.0.
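If you go the downgrade route, it's also worth pinning the package so a routine `apt upgrade` doesn't silently pull the broken version back in. A minimal sketch (VMID 100 is an example; a reboot of the VM via the GUI works just as well as the `qm` commands):

```shell
# Downgrade QEMU to the last known-good version
apt install pve-qemu-kvm=6.0.0-4

# Hold the package so it is not upgraded again automatically
apt-mark hold pve-qemu-kvm

# A full stop/start is needed per VM -- a reboot inside the guest is not
# enough, because the old QEMU process keeps running
qm shutdown 100
qm start 100
```

Once a fixed pve-qemu-kvm version is released, `apt-mark unhold pve-qemu-kvm` lets it upgrade normally again.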
I ran the test this morning, and it seems to work fine for me. I repeated the same test as in the past (downloading the gitlab-ce package, which is almost 1 GB in size). The error messages are not shown anymore.
Because my test is not comprehensive, could others please test as well and comment here.
Same issue here with a Debian guest. I could reliably trigger I/O errors by building a development project inside it, which caused the root partition to remount read-only. I was also using virtio block over ZFS storage. After upgrading to the pve-qemu-kvm_6.1.0-3 package linked by Tom, the problems went away.
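For reference, installing a manually downloaded .deb like the one linked by Tom is just a plain dpkg install, followed by the same stop/start cycle as with the downgrade. The exact filename below is an assumption based on the package version mentioned; use whatever file the link actually gives you:

```shell
# Install the test build of pve-qemu-kvm (filename is an example)
dpkg -i pve-qemu-kvm_6.1.0-3_amd64.deb

# Each VM must be fully stopped and started again to pick up the new QEMU
qm shutdown 100
qm start 100
```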