Bad drive in RAIDZ1-0 data pool preventing boot?

mansanram

New Member
Jun 22, 2024
Hello all,

This question is to satisfy my curiosity, as the system is up and running. I'm running Proxmox in my home lab on a Dell R540 with three raidz pools: two for storage and one for root/system. One of the drives in my first storage pool crapped out, and after that, every attempt to reboot the server froze at the "Welcome to GRUB" prompt. Removing the drive fixed it, and now I'll just deal with a degraded pool until I replace that drive. My question is: why would a drive with no system files on it cause this? That pool holds data for most of my containers and VMs, so when I booted up with all of its member drives removed I expected more than half of my containers/VMs not to start, but I'm still unsure about the freezing at the GRUB screen.
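For anyone landing here in the same spot, a rough sketch of the replacement workflow once a new drive arrives (the pool and device names below are placeholders, not from the original post):

    # See which pool is degraded and which device failed
    zpool status -x

    # Swap in the new disk for the failed one (names are examples only)
    zpool replace tank /dev/disk/by-id/old-failed-disk /dev/disk/by-id/new-disk

    # Watch the resilver progress and check for errors
    zpool status tank

Once the resilver completes, the pool should return to ONLINE on its own.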

Apologies if this has been answered somewhere, but my google-fu is failing me here. Every time I tried to search, I ended up writing an essay, so I figured I might as well ask here. Thanks.
 
Once more, one of ZFS's unfortunate quirks on display here. In theory one disk can fail in a raidz1 and all pools should still mount; and even if that pool doesn't mount because of additional errors in the degraded raidz, the ZFS kernel module should not block the boot process over it.
For a filesystem that claims to be the best, that's still a sad state of affairs.
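For what it's worth, "Welcome to GRUB!" is printed before the kernel or the ZFS module ever load, so one plausible explanation is that GRUB itself stalled while enumerating disks and hit the unresponsive drive, rather than ZFS blocking anything. A quick way to check whether the drive itself is the culprit, from a live/rescue shell (the device name is just an example):

    # SMART health summary for the suspect disk
    smartctl -H /dev/sdb

    # Full attributes and error log if the summary looks suspicious
    smartctl -a /dev/sdb

A drive that times out on reads can hang that early enumeration even though it holds no system files at all.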
 
Well, I'm just happy there was no data loss. Nothing on there is super important, but it would be a huge pain to have to reacquire everything. I'm slightly more used to Btrfs, but figured I should give ZFS a shot, if nothing else to gain some familiarity.
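If you want extra confidence that nothing was silently damaged while the pool ran degraded, a scrub will re-read and verify every block against its checksums (pool name is a placeholder):

    # Kick off a scrub of the affected pool
    zpool scrub tank

    # Check progress and any checksum errors it turns up
    zpool status tank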