Buffer I/O error on VM after reinstall

Adam_

Member
May 23, 2019
Hello there!

I've been using my Proxmox v6 instance 24/7 for the last 6 years and everything was working perfectly. Recently, after a reboot, Proxmox would not come back up: the disk (an SSD) was corrupted, had bad blocks and was generally dead. I replaced the disk and installed a fresh instance (v8). Since my setup keeps the VM disks on a network location, I figured it would be a walk in the park to get everything going again.

So it was - I installed Proxmox v8, added the CIFS storage, created my 3 VMs and attached their disks from the network storage. Everything went smoothly, but every machine was getting Buffer I/O errors. I read some threads (https://forum.proxmox.com/threads/vm-i-o-errors-on-all-disks.94259/) which gave me a hint to go back to Proxmox v6, which is the last ISO still available (I don't remember the exact version I was originally running).
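For reference, re-adding the share from the shell looks roughly like this (the storage name and server IP match the mount shown further down; the credentials are placeholders):

Code:
# Illustrative only - register the CIFS share as a Proxmox storage for VM images
pvesm add cifs ISO --server 192.168.117.111 --share ISO \
    --username <user> --password <secret> --content images,iso
# Check that the storage comes up as active
pvesm status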

Now I'm on Proxmox v6 and two out of the three VMs are still getting Buffer I/O errors. Unfortunately I don't have my original config, but I have tried every possible combination of VirtIO/SCSI settings (a rough sketch of the kind of changes is below). What is more interesting - I can't get past a fresh Ubuntu 22 installation on a new VM that I created; the installer stops during partition creation.
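To give an idea, these are the sorts of changes I was cycling through with qm set, using VM 104 as the example (the volume name comes from the config further down; exact cache/aio values varied):

Code:
# Illustrative only - switch the SCSI controller type and tweak cache/aio on the existing disk
qm set 104 --scsihw virtio-scsi-pci
qm set 104 --scsi0 ISO:104/vm-104-disk-0.qcow2,cache=writethrough,aio=threads

# ...or detach the image from the SCSI bus and re-attach it on the VirtIO block bus
qm set 104 --delete scsi0
qm set 104 --virtio0 ISO:104/vm-104-disk-0.qcow2
qm set 104 --boot order=virtio0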

- The VM that works fine is the Ubuntu 18 one; the others, which get the errors, are Ubuntu 20 and Ubuntu 22.
- Hardware is an HP MicroServer Gen8 with a RAID0 disk (its RAID1 is not a real RAID1 and does not boot after the Proxmox installation).
- There is plenty of space on the network location - it is a proper ZFS NAS.

Any hint is greatly appreciated!

(Attachment: Screenshot 2023-12-23 at 09.21.27.png)

Code:
root@cloud:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-4 (running version: 6.4-4/337d6701)
pve-kernel-5.4: 6.4-1
pve-kernel-helper: 6.4-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-2
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-1
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-3
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-1
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Code:
root@cloud:~# cat /etc/pve/qemu-server/101.conf 
boot: dcn
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 2000
name: HOMEBRIDGE
net0: e1000=BE:A1:5A:DC:E1:48,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: ISO:101/vm-101-disk-1.qcow2,discard=on,size=15G
smbios1: uuid=e337c15a-a5e7-4c3c-920b-095f9fa21d36
sockets: 1
vmgenid: a0527453-e96d-4328-810d-822a1d4f4cf5


root@cloud:~# cat /etc/pve/qemu-server/102.conf 
boot: dcn
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
name: DB
net0: e1000=FA:C1:2F:75:CC:C5,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: ISO:102/vm-102-disk-0.qcow2,discard=on,size=14G
smbios1: uuid=79ee6310-ed88-4700-b868-75722e84cbdc
sockets: 2
vmgenid: 41a61dfe-1920-49f4-83d0-91d7fe523fe0

root@cloud:~# cat /etc/pve/qemu-server/104.conf 
boot: order=scsi0
cores: 1
memory: 6000
name: NEXTCLOUD
net0: e1000=1E:E1:08:26:CA:44,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: ISO:104/vm-104-disk-0.qcow2,size=140G
smbios1: uuid=dc002271-b79a-4aea-bbea-395deb0b812d
sockets: 2
vmgenid: 146d06c0-b010-47ea-b0a2-b6588ff61a8e

Code:
root@cloud:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                   7.8G     0  7.8G   0% /dev
tmpfs                  1.6G   11M  1.6G   1% /run
/dev/mapper/pve-root    57G  2.4G   51G   5% /
tmpfs                  7.8G   43M  7.7G   1% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse               30M   16K   30M   1% /etc/pve
//192.168.117.111/ISO  2.2T  1.9T  389G  83% /mnt/pve/ISO
tmpfs                  1.6G     0  1.6G   0% /run/user/0
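For anyone comparing setups, the CIFS storage behind that mount is defined in /etc/pve/storage.cfg roughly as follows (illustrative excerpt based on the names above; the real entry may carry extra options):

Code:
cifs: ISO
        path /mnt/pve/ISO
        server 192.168.117.111
        share ISO
        content images,iso
        username <user>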
 
For now, I have resolved the issue by moving the disks to local LVM (sketch below). There are no more Buffer I/O errors and I can install Ubuntu on a fresh VM. I guess I will move to a bigger SSD and make daily backups to the CIFS location in case a similar scenario happens again. Strange situation, though...
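In case it helps anyone else, the move and a manual backup can be done from the shell roughly like this (VM 104 and the storage names are examples from my setup; backing up to the CIFS storage assumes it has the backup content type enabled):

Code:
# Illustrative only - move the disk from the CIFS storage to local LVM-thin,
# removing the source copy once the move succeeds
qm move_disk 104 scsi0 local-lvm --delete 1

# One-off backup to the CIFS storage; daily runs can be scheduled under
# Datacenter -> Backup in the GUI
vzdump 104 --storage ISO --mode snapshot --compress zstd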
 
