BTRFS raid 10 pulled wrong drive

sarsenal

Renowned Member
Mar 5, 2016
I was working late and misread which disk was bad, so I accidentally removed /dev/sda3 from the BTRFS RAID 10. A painful mistake! I managed to boot from the PVE ISO, mount the OS read-only, and scrub the filesystem. After that, I reinstalled GRUB to the disk and breathed a sigh of relief when the system booted up again.
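For anyone in a similar spot: if the pulled partition still needs to be rejoined to the array, these are the usual approaches (a hedged sketch; the device names and mount points are examples, adjust them to your layout):

```shell
# If the partition was logically removed (btrfs device remove), add it
# back and rebalance so data/metadata are spread across it again:
btrfs device add /dev/sda3 /
btrfs balance start -dconvert=raid10,soft -mconvert=raid10,soft /

# If instead the device shows up as missing, mount degraded and use
# replace (the devid of the missing device is shown by
# "btrfs filesystem show"):
# mount -o degraded /dev/sdb3 /mnt
# btrfs replace start <missing-devid> /dev/sda3 /mnt
```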

However, I want to make sure BTRFS repairs the filesystem and that everything is working correctly again. I am running PVE 7.3.6, so I would like to know the best way to handle this kind of situation. What should my next steps be?
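In case it helps others, these are the standard health checks after a device mishap (a sketch using btrfs-progs; `/` is assumed to be the BTRFS mount point):

```shell
# Confirm all member devices are present and none is reported missing
btrfs filesystem show /

# Check per-device error counters
# (write/read/flush/corruption/generation errors)
btrfs device stats /

# Scrub the mounted filesystem to verify checksums and repair bad
# copies from the good RAID 10 mirror (-B waits for completion)
btrfs scrub start -B /

# Review the result; "error summary: no errors found" means all data
# and metadata checksums verified
btrfs scrub status /

# Once everything is healthy, old error counters can be reset
btrfs device stats -z /
```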
 
It improved things a little, but there are still a few errors. I will finish moving to SSDs and replacing the server, and then just re-install the node.

Does anyone know whether, if I use the Proxmox backup scripts and restore the node from the backed-up configs, it will pick up the Ceph disks? I assume it should, and I don't see why it wouldn't, since the configs are all put back.