Raid1 boot array, drive replacement procedure

buckweet1980

New Member
Mar 27, 2024
I'm looking to set up a BTRFS RAID1 boot array, which can be done via the installer, but how in the world do you replace a drive?

Can someone direct me in HOW to replace a failed drive? I've searched and searched with no real answer. The challenge I'm running into is there's no way that I can find to rebuild the mirror once it's degraded.

I've added the rootflags for degraded mode, along with the rw flag, to GRUB. But if you boot from one drive (because the other has failed, for example), it boots into a read-only state and you can't modify the BTRFS mirror.
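For reference, a minimal sketch of the GRUB change I'm describing (the exact flag values are what I tried, adjust for your setup):

```shell
# /etc/default/grub -- sketch of the degraded-boot kernel options
# (assumes the root filesystem is BTRFS; run `update-grub` afterwards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootflags=degraded rw"
```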


So how does one do this? Do we have to boot in a live media scenario? Is there a way to get the filesystem mounted in RW so you can make modifications with the Proxmox install?

It's surprising that this isn't documented, at least I can't find any solutions. I'm open to ZFS, but being that these are SSDs, I read that btrfs would be a better approach?

Not sure where to go from here...
 
It's surprising that this isn't documented, at least I can't find any solutions. I'm open to ZFS, but being that these are SSDs, I read that btrfs would be a better approach?

https://pve.proxmox.com/wiki/BTRFS

BTRFS integration is currently a technology preview in Proxmox VE.

If you can handle the preview feature on your own, feel free to use it. However, if you would need documentation or support, it shouldn't be an option for you.
Furthermore, there is no basis for claiming BTRFS is superior simply because you're using SSDs.

Before relying on it, create a new mirror, pull one drive, add a new disk, and test the replacement procedure for yourself.
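One way to rehearse this without spare hardware is with loop devices. A sketch only (needs root and btrfs-progs; the file names and mount point are examples):

```shell
# Rehearse a BTRFS RAID1 drive replacement on loop devices (run as root).
truncate -s 1G disk1.img disk2.img disk3.img
DEV1=$(losetup --find --show disk1.img)
DEV2=$(losetup --find --show disk2.img)
mkfs.btrfs -f -d raid1 -m raid1 "$DEV1" "$DEV2"   # two-device mirror
mkdir -p /mnt/test
mount "$DEV1" /mnt/test

# Simulate a failure: unmount, detach one device, remount degraded.
umount /mnt/test
losetup -d "$DEV2"
mount -o degraded "$DEV1" /mnt/test

# "Replace": attach a new device, add it, drop the missing one, rebalance.
DEV3=$(losetup --find --show disk3.img)
btrfs device add -f "$DEV3" /mnt/test
btrfs device remove missing /mnt/test
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/test
```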

https://docs.oracle.com/en/operatin...admin/fsadmin-ManagingtheBtrfsFileSystem.html

You say you can't find it, but I think if you look, you'll find similar threads. Isn't it strange to ask for documentation for a feature that isn't officially supported?
 
Last edited:
  • Like
Reactions: Johannes S
https://pve.proxmox.com/wiki/BTRFS

BTRFS integration is currently a technology preview in Proxmox VE.

If you can handle the preview feature on your own, feel free to use it. However, if you would need documentation or support, it shouldn't be an option for you.
Furthermore, there is no basis for claiming BTRFS is superior simply because you're using SSDs.

Before relying on it, create a new mirror, pull one drive, add a new disk, and test the replacement procedure for yourself.

https://docs.oracle.com/en/operatin...admin/fsadmin-ManagingtheBtrfsFileSystem.html
As I stated in my opening post, you can't modify anything, because the filesystem will only mount read-only once it's in a degraded state.
 
  • Like
Reactions: Johannes S and UdoB
Replacing a Failed Disk in Proxmox (BTRFS RAID1 – Simple Explanation)

If one disk fails in a Proxmox BTRFS RAID1 setup, your server will still boot from the other disk. That’s normal and expected.

Sometimes it boots in “read-only mode.” This is just a safety feature — your data is not lost.


1. Boot the server normally
If it says “degraded” or mounts read-only, that’s okay.

2. Make the system writable
Run:

mount -o remount,rw /
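If the remount is refused, the usual fallback is to mount the surviving member explicitly with the degraded option (a sketch; the device name is an example):

```shell
# Fallback if remount,rw is refused: mount the surviving device
# explicitly in degraded mode (/dev/sda3 is just an example).
mount -o degraded,rw /dev/sda3 /mnt
```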


3. Replace the physical disk
Shut down, swap the failed SSD with a new one, and boot again (in degraded mode if needed).


4. Add the new disk to the filesystem

btrfs device add /dev/sdX /

(Replace /dev/sdX with the new disk.)


5. Remove the missing device and rebuild the mirror

btrfs device remove missing /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

Note the order: add the replacement first, then remove the missing device. On a two-device RAID1, BTRFS refuses to remove a device while only one remains, because that would drop the filesystem below the minimum of two devices.


This rebuilds the mirror.
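Two follow-ups worth doing, sketched below: verify the rebuild, and make the new disk bootable. BTRFS only mirrors the filesystem, not the partition table or the boot loader, so the new disk needs its boot partition set up too (proxmox-boot-tool is Proxmox's bootloader helper; /dev/sdX2 is an example partition name):

```shell
# Check that both devices are present again and the balance finished:
btrfs filesystem show /
btrfs balance status /

# Make the new disk bootable. After recreating the partition layout
# on the new disk, initialize its ESP (/dev/sdX2 is an example):
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
proxmox-boot-tool status
```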
 
  • Like
Reactions: Onslow