Qcow disk corruption

Apr 9, 2018
Hi

I just had a weird issue on a Proxmox 5 cluster. It had been running well with a Synology NAS on NFS in sync mode for a while. Last week, a VM no longer started after a reboot; it reported no boot device. When I attached a livecd, fdisk no longer saw any partitions. We have had the issue on two VMs in total since.

I managed to restore one of them by recovering the partition table and reinstalling GRUB. The scary thing is that we had four snapshot backups going back a week, and all of them had the same issue.
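For anyone hitting the same thing, the recovery looks roughly like this (tool choice and device names are examples rather than exactly what I ran; these are the Debian-style commands, on CentOS 7 it's grub2-install and grub2-mkconfig instead):

    # from a livecd: scan the disk and let testdisk rewrite the lost partition table
    testdisk /dev/sda

    # then reinstall grub from a chroot into the restored root filesystem
    mount /dev/sda1 /mnt
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt grub-install /dev/sda
    chroot /mnt update-grub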

qemu-img check reported the qcow2 files as consistent.
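For reference, that was just the standard consistency check (the path is an example):

    qemu-img check /var/lib/vz/images/100/vm-100-disk-1.qcow2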

Any idea what could have caused this? I'm starting to suspect the snapshot backups themselves...

EDIT: The affected VMs were running CentOS 7 and Debian 9.
 
Hi,

what do you use as the VM I/O bus for these VMs?
Only VirtIO, or SCSI with the VirtIO SCSI controller, is recommended.
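You can set that controller on the CLI with something like this (VM ID 100 is an example):

    qm set 100 --scsihw virtio-scsi-pci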

Is your NAS OK?
Any HW defect?
 
The bus is set to VirtIO SCSI, BUT I just noticed that some of the VMs, including the two that had issues, had their disks attached as SATA (I did not set them up myself).
Is that likely the cause? They had been running like this for quite some time before the issue happened.
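For reference, the bus each disk uses also shows up on the CLI with something like this (the VM ID is an example):

    qm config 100 | grep -E '^(ide|sata|scsi|virtio)[0-9]'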

The NAS reports no errors, and only two VMs are affected. I suspected the storage too, but wouldn't that show up in the check of the qcow2 file? The NAS and disks are only 5-6 months old.
 
SATA and IDE can corrupt the image if the shutdown was not graceful.
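To move an existing disk from SATA to the SCSI bus, detach it and re-attach it, roughly like this (VM ID and volume name are examples):

    # detach the disk; it shows up as unused0 in the VM config
    qm set 100 --delete sata0
    # re-attach the same volume on the scsi bus
    qm set 100 --scsi0 local:100/vm-100-disk-1.qcow2
    # make sure the VM still boots from that disk
    qm set 100 --bootdisk scsi0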
 
