[SOLVED] Trying to replace a "failed" drive in existing raid

BobParr222

I may have done things wrong thinking that just putting in a new drive would allow the RAID to rebuild itself.

That said, I have a 5-disk RAID, but only 4 disks are still recognized. I have the 5th disk plugged in, but cannot get it to be seen so that it can be added to the existing RAID and the array rebuilt. This is also causing the associated VM to not boot.

"could not activate storage 'Z5', zfs error: cannot import 'Z5': no such pool available (500)"

The partition on the new disk currently shows as LVM, but I cannot figure out how to change that.

How do I get my new disk set up/configured so that it is seen by the existing zpool, the RAID rebuilds, and I can then power up my VM?
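
For reference, my (possibly mistaken) understanding was that replacing a failed disk in a raidz pool normally works something like the sketch below, but since the pool won't even import I can't get that far. Pool and disk names here are just placeholders, not my actual setup:

# See which disk the pool reports as faulted/missing
zpool status <pool>

# Rebuild onto the new disk in place of the failed one
zpool replace <pool> <old-disk-or-guid> /dev/disk/by-id/<new-disk>

# Watch the resilver progress
zpool status -v <pool>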

See attached files for further info about the environment. Please let me know if you need more commands run.

Thanks!
 

Attachments

  • lsblk.txt
  • pveversion -v.txt
  • zpool status.txt
  • zpool import.txt
Your storage makes no sense.
You have 4 single-disk ZFS pools (and the fifth single-disk pool is probably the missing one), and you created one zvol on each of those single-disk pools.
Then you are using these 5 zvols to create another pool, a raidz1. You are basically running ZFS on top of ZFS, which is a bad idea because it amplifies overhead and adds extra capacity loss (a ZFS pool shouldn't be filled more than about 80%, so you only get 80% of 80%, i.e. roughly 64%).

I would back up that data, then destroy everything and start from scratch, creating a raidz1 that uses the 5 physical disks directly instead of zvols stored on single-disk pools.
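
A rough sketch of what that could look like once the data is safe; the pool name and the by-id paths are placeholders, use your real /dev/disk/by-id/ names:

# ONLY after everything is backed up: destroy the old nested pools
# zpool destroy <old-pool>

# Create one raidz1 pool directly on the five physical disks
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 \
  /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5

# Verify the layout
zpool status tank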
 
I would love to back up the data and start from scratch, but I cannot get the VM to power up. My goal in adding the 5th disk was to get the VM powered up so I can move the data off and then do as you suggested and start from scratch. How do I get the VM to power up?

"Error: could not activate storage 'Z5', zfs error: cannot import 'Z5': no such pool available"
 
I guess your VM is a TrueNAS?

The best way to use a TrueNAS VM would be to buy an HBA card, attach those 5 HDDs to that HBA, and then use PCI passthrough to pass that HBA through into the VM and let TrueNAS create the raidz1.
If PCI passthrough is not an option, I would use disk passthrough to bring the HDDs into the VM and let TrueNAS create the raidz1 (see the sketch below).
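
Disk passthrough would look roughly like this (VMID 100, the SCSI slot and the by-id name are just examples):

# Attach a whole physical disk to VM 100 as an additional SCSI disk
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

# Repeat with -scsi2 ... -scsi5 for the remaining disks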

To boot that VM, go to it in the PVE webUI and detach the virtual disk that lives on the missing pool. If PVE won't allow that, you can comment out the line defining the failed disk in your VM's config file (in the /etc/pve/qemu-server/ folder). With that failed virtual disk removed, PVE shouldn't complain any more, the VM should start, and the raidz1 should still be usable because 4 of its 5 disks are still there.
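
For example (the VMID, storage name and disk line are only illustrative, your config will look different):

# Edit the VM's config file (replace 100 with your VMID)
nano /etc/pve/qemu-server/100.conf

# Put a '#' in front of the line for the disk on the missing pool, e.g.:
#scsi4: Z5:vm-100-disk-0,size=4T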
 
Yes, the VM is a TrueNAS. I couldn't get the webUI to detach the virtual disk. However, your suggestion of commenting out the line for the 5th disk worked perfectly, and the VM is now booted up - I am currently copying data from that VM to another location. Once the copy is complete, I will start my PVE from scratch (with hopefully a better understanding of ZFS storage setup). So, THANK YOU!!!

I do have an HBA card, but it seems to ruin disks (any drive in a specific slot would start throwing a massive amount of disk sector errors, leading me to think the drive was failing). Since I have 6 SATA slots on my motherboard I used one for the OS and the other five for a five-disk software RAID. I then chose to use Proxmox to virtualize everything. Proxmox documentation gave me the impression that it was preferable to handle the disks in a software RAID as opposed to hardware RAID, hence the [rookie] setup I used.

Thanks again! Editing the VM config file (/etc/pve/qemu-server/[config.file]) did the trick to get me back up and running again.
 
