This is my first real go-around using a ZFS pool as my storage backend.
I recently installed Proxmox on a Dell R420 that I literally got for free. I flashed the HBA (H710P Mini) to IT mode and created a ZFS pool with 8 disks (2TB SSDs) in RAID10. All disks were purchased NIB. Proxmox itself is installed on a separate 256GB SSD.
Setup went fine and I chugged along. Now the pool status is showing degraded and 2 of the disks are not recognized. The data is still intact, but I was wondering: can I hot-swap the two disks in question?
I don't believe it's the disks themselves, so I was hoping I could remove the disks in question, wipe them (delete the partitions) in another system (Linux live), and then reintroduce them into the pool.
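For context, this is the rough sequence I'm imagining, assuming the pool is named rpool-data and the flaky disk shows up as /dev/sdX (both placeholders, not my actual names):

    zpool status -v rpool-data          # confirm which disks are faulted/unavailable
    zpool offline rpool-data /dev/sdX   # take the suspect disk out of its mirror
    wipefs -a /dev/sdX                  # from the live system: clear old partition/label data
    zpool replace rpool-data /dev/sdX   # re-add the same (now blank) disk and let it resilver
    zpool status -v rpool-data          # watch the resilver progress

Does that look sane, or am I missing something?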
Otherwise, I believe I may need to recreate the whole pool with a few more features enabled/disabled to improve the longevity of the SSDs.
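If I do end up rebuilding, I'm guessing the recreate step would look something like this (again just a sketch; "tank" and the disk paths are placeholders, and the options are my guesses at what helps with SSD wear):

    zpool create -o ashift=12 -o autotrim=on -O compression=lz4 tank \
      mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
      mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD

(extended with more mirror pairs to cover all 8 disks for the RAID10 layout)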
Thanks for any insight.