Dead SSD drive on ZFS drops to initramfs

menelaostrik

Member
Sep 7, 2020
I have an SSD with a failed controller that was part of a mirrored ZFS vdev. Now the system boots into initramfs and says it can't import the pool "rpool". When I issue
zpool import -f rpool

it says that there's no such pool or dataset. However, in the output of "zpool import" I can see that rpool has status UNAVAIL because of insufficient replicas. I would like to replace the dead drive with a new one. I have tried:
1) zpool replace
2) zpool offline
3) zpool detach

and every time I get "no such pool or dataset". The only way I got it working was importing read-only with:
echo "1" | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds zpool import -o readonly=on rpool

screenshot of zpool import: https://prnt.sc/91X8xc3vG1SA
screenshot of zpool status after mounting as read-only: https://prnt.sc/vKzNL0nUFALc

Mounting read-only boots into the OS. However, since the pool is read-only, I can't actually make any changes to it, like replacing the damaged drive. Any suggestions?
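
For completeness, the invocations I tried were along these lines (device paths are placeholders, I don't have the real IDs at hand):
Code:
zpool replace rpool /dev/disk/by-id/ata-OLD-SSD /dev/disk/by-id/ata-NEW-SSD
zpool offline rpool /dev/disk/by-id/ata-OLD-SSD
zpool detach rpool /dev/disk/by-id/ata-OLD-SSD
All three came back with "no such pool or dataset" — presumably because the pool wasn't imported at that point.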
 
To me "rpool" looks like a stripe/raid0 so no parity. Otherwise there should be a "mirror-0" which is missing.
 
To me "rpool" looks like a stripe/raid0 so no parity. Otherwise there should be a "mirror-0" which is missing.
It's a mirror. I don't know why it's shown like that in initramfs.
I have it mounted as read-only and the zpool status output is: https://prnt.sc/vKzNL0nUFALc

If just attaching a new drive isn't possible, is there a way to migrate the VMs to a new PVE host over the network (Samba maybe)?
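I was imagining something like ZFS send/receive over SSH (hostname, dataset, and snapshot names are made up, and I'm assuming sending from a read-only pool works):
Code:
# the pool is imported read-only, so only an already existing snapshot can be sent
zfs send rpool/data/vm-100-disk-0@existing-snapshot \
  | ssh root@new-pve zfs recv rpool/data/vm-100-disk-0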
 
No, it's RAID0, also known as a stripe. Or, as ZFS people call it, "I hate my data" mode.

That's why you have 10 data errors that cannot be repaired; ZFS does not have copies for those files. Use zpool status -v rpool (as suggested in the screenshot) to see if your virtual disks are corrupted.
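
For reference, when data is affected, the tail of zpool status -v looks roughly like this (the entry below is only illustrative; zvols show up as dataset:<0x...> object references rather than file paths):
Code:
errors: Permanent errors have been detected in the following files:

        rpool/data/vm-<id>-disk-0:<0x1>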
Omg, you're totally right.
Any chance I could restore from this?
Code:
root@hyperlan:~# zfs list -t snapshot | grep 101
rpool/data/vm-101-disk-0@autosnap_2022-10-03_09:42:51_monthly   855M      -     89.9G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-05_09:55:18_weekly    454M      -     89.9G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-08_00:00:44_daily     380M      -     89.9G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-09_00:00:51_daily     210M      -     90.2G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-10_00:00:19_daily     145M      -     90.2G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-11_11:44:45_weekly      0B      -     90.5G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-11_11:44:45_daily       0B      -     90.5G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-12_00:00:03_daily       0B      -     90.5G  -
rpool/data/vm-101-disk-0@autosnap_2022-10-13_00:00:03_daily     538M      -     90.4G  -
root@hyperlan:~#
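
If send/receive works from a read-only import, I'm hoping something like this could copy the last intact snapshot onto a fresh pool (newpool is a made-up name for a pool on a healthy disk):
Code:
# allow import with a missing top-level vdev, then import read-only
echo "1" | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds
zpool import -o readonly=on rpool

# replicate one VM disk from its latest snapshot to the healthy pool
zfs send rpool/data/vm-101-disk-0@autosnap_2022-10-13_00:00:03_daily \
  | zfs recv newpool/data/vm-101-disk-0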

I had also set up backups using syncoid a long time ago on the same host. I found this in crontab:
Code:
0 15 * * * /usr/sbin/syncoid --quiet --no-sync-snap --recursive rpool backup-rpool/rpool
so it should sync rpool to backup-rpool on an external HDD daily at 15:00.
I have connected that HDD to another host but I can't find the backup sets.
Shouldn't they appear in zfs list?
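Or do I need to import the backup pool on that host first before its datasets show up? Something like:
Code:
zpool import                # with no arguments: lists pools available for import
zpool import backup-rpool   # import the backup pool by name
zfs list -r backup-rpool    # the syncoid-replicated datasets should appear here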
 
