IO error on virtual zfs drive after power loss

SamTzu

How does one go about fixing ZFS problems inside a virtual drive?
Should it be done inside the VM? (Might be difficult if you can't start it.)
or
Should it be done on the host?

What commands do you use?
 
Did you already run a scrub on the PVE host to fix the corrupted data? If it can't fix it because you are using a single disk or a stripe, I would restore the last backup.
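For reference, a host-side scrub is roughly this (rpool here is just a placeholder, use whatever zpool list shows on your PVE host):

zpool scrub rpool        # start a full scrub of the pool
zpool status -v rpool    # watch progress and list any files with unrecoverable errors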
 
Inside the VM:

root@vm2404:~# zpool import
   pool: vdd
     id: 4588309049495493978
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
         The pool may be active on another system, but can be imported using
         the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
 config:

         vdd                                      FAULTED  corrupted data
           scsi-0QEMU_QEMU_HARDDISK_drive-scsi1   ONLINE

root@vm2404:~# zpool scrub vdd
cannot open 'vdd': no such pool
 
If it complains bitterly, you could try importing read-only to rescue the data:
sudo zpool import -f -o readonly=on [poolname] -R /mnt/[poolname]
That's a good tip. I actually tried that, but it failed. So far -FX is the only thing that seems to work (it's still running).
Apparently the -X and -T flags are last-resort options only.
So far I have only gotten one kernel error from ZFS:
PANIC: zfs: adding existent segment to range tree.
So far so good. It's good to live in hope (said the tapeworm).
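Roughly what that rewind import looks like, in case it helps someone else (vdd is my pool name, yours will differ, and -X throws away recent transactions, so only reach for it when nothing else imports):

zpool import -FX vdd     # rewind the pool to an older transaction group, discarding the damaged ones
zpool status -v vdd      # afterwards, check what state the pool came back in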
 
Oh, that error, yes, that's what I encountered. It would hard-lock the array under heavy writes, resulting in data loss.

Ultimately, I had to run the ZFS pool using the rescue option, then copy the entire VM from the pool to a specially purchased external HDD chassis... [I had a VM running atop a ZFS mirror, so a different situation, but the same error.] I also had to stop trusting the RAM, which reported itself as OK after testing, and replace it with better-quality, higher-capacity modules. Against advice, I also added an enterprise-grade ZIL/read-cache device to the ZFS array, and that brought it all under control. It does not run fast, but it runs very safely and without locking. [An enterprise-grade Samsung SSD combined with 8 TB EXOS drives in a mirror.]
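In case anyone wants to do the same, adding a separate log and cache device to an existing pool is roughly this (the pool name tank and the device paths are placeholders; pick your own from /dev/disk/by-id):

zpool add tank log /dev/disk/by-id/ata-Samsung_SSD_example-part1      # separate ZIL/SLOG for synchronous writes
zpool add tank cache /dev/disk/by-id/ata-Samsung_SSD_example-part2    # L2ARC read cache
zpool status tank                                                     # confirm the new log and cache vdevs show up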
 
For those of you who will run into ZFS fun when an import hangs or crashes... this might help you.
zpool import -o readonly=on -f POOLNAME
After trying dozens of different commands, that was the only one that worked for me... and I had to run it from a live (Kali) ISO image with the latest ZFS tools installed.
After that I could read the pool and rsync -a the subvol/disk to safety.
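Spelled out, that rescue sequence was roughly the following (vdd, the subvol dataset name, and the target path are placeholders for whatever your own pool contains):

zpool import -o readonly=on -f vdd                      # forced read-only import
zfs list -r vdd                                         # find the subvol/disk dataset and its mountpoint
rsync -a /vdd/subvol-100-disk-0/ /mnt/backup-target/    # copy the data somewhere safe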
 
