ZFS FAULTED after disk replacement

Jordan67

New Member
Feb 16, 2024
Hello

One morning I noticed that the ZFS pool on my server was degraded: two SSDs appeared to have failed, and the LEDs on those disks had turned red.
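The degraded state showed up in zpool status, roughly like this (the pool name here is only a placeholder):

zpool status -v tank   # shows each member's state plus read/write/checksum error counters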

The failed disks were Samsung 850 Pro 1 TB SSDs. Since I could no longer find any on the market, I decided to upgrade to larger Kingston DC600M 1.92 TB disks.

When I replaced the failed disks and ran the zpool replace commands, the resilvering completed successfully and all my disks showed ONLINE again.
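The replace commands were along these lines (pool name and disk identifiers are placeholders, not the exact ones from my system):

zpool replace tank old-disk new-disk   # swap the failed member for the new SSD
zpool status tank                      # follow the resilver until it finishes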

After a final reboot of the server, the two disks (although brand new) went back to red, and the ZFS pool was degraded again.

I tried moving one of the two disks to a different drive bay to check whether the problem was coming from my backplane, but the red LED follows the disk.

I find it hard to believe that two brand-new disks are both faulty.

Screenshot from 2024-07-11 15-58-42.png


Screenshot from 2024-07-11 16-12-54.png


The SMART status of the disks is OK, so I don't understand why ZFS considers them faulted.
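I checked SMART roughly like this (the device path is a placeholder for one of the new disks):

smartctl -H /dev/sdc   # overall health self-assessment: PASSED/FAILED
smartctl -a /dev/sdc   # full attribute table, including wear level and error counters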

Thanks a lot :)
 
Export the pool and re-import it like this:

zpool import -a -f -d /dev/disk/by-id # or by-path

This is likely happening because your pool is still using short disk names, e.g. sdc. Those names are not stable: they can shift after a reboot or a hardware change, so ZFS ends up looking for its members on the wrong devices.
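A rough sequence, assuming the pool is named tank (substitute your own pool name; note that a root pool cannot be exported from the running system):

zpool status tank                       # if members show up as sdb, sdc, ... the pool was imported with short names
zpool export tank                       # stop anything using the pool first, then export it
zpool import -a -f -d /dev/disk/by-id   # re-import using stable by-id device names
zpool status tank                       # members should now be listed by their /dev/disk/by-id names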
 
