System did not find BTRFS root after removing one disk

fowr0yl
Member · Mar 15, 2022
Hi,
I have installed PVE 7.1 with a btrfs mirror using 2 SSDs. During installation these SSDs were /dev/sda and /dev/sdb.
After the base installation I added 3 HDDs with an existing ZFS pool. The device names of the SSDs changed to /dev/sdd and /dev/sde.
No problem so far.

To check what happens when an SSD fails, I simply removed a SATA cable. And oops .....

I got the grub menu as expected, the system starts booting, and then I get the error message "/dev/sdd3 missing".
Removing the other SSD instead gives the same result.
Only when both SSDs are connected and running can I boot the system!!

That's not what I expect from a mirrored setup :(
 
Ok,
for testing purposes I have added "rootflags=degraded" to the kernel command line in /boot/grub/grub.cfg.
And it worked as expected.

To make it permanent, I have now changed GRUB_CMDLINE_LINUX_DEFAULT="quiet" in /etc/default/grub to GRUB_CMDLINE_LINUX_DEFAULT="rootflags=degraded". (I also removed "quiet", since I hate that option.)
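For reference, the change and the follow-up command look roughly like this (the prompt/hostname is just an example); update-grub has to be run afterwards so the change ends up in the generated /boot/grub/grub.cfg:

Code:
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="rootflags=degraded"

root@pve:~# update-grub    # regenerates /boot/grub/grub.cfg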

But I feel this should be done by the installer when a btrfs raid is chosen as the root file system.

By the way, is there an easy method to add btrfs root snapshots to the Proxmox grub menu?
On Manjaro and Arch Linux I simply had to install the additional package grub-btrfs ....
 
But I feel this should be done by the installer when a btrfs raid is chosen as the root file system.
I tend to disagree - while I see that it can help in certain situations (e.g. the server is remote and you don't have easy access to a console during boot), the downside is that many users will simply not notice when one of their raid1 disks fails, and then suffer unexpected data loss if the other one has an issue as well.

By the way, is there an easy method to add btrfs root snapshots to the Proxmox grub menu?
On Manjaro and Arch Linux I simply had to install the additional package grub-btrfs ....
Not that I know of - grub-btrfs does not seem to be packaged in Debian yet, but there is an intent-to-package request:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=941627
 
Ok, I disagree too :)
When running Proxmox as a server you still have to watch your system all the time to get information about your file system and/or hardware - that is true regardless of whether the rootflags option is set or not.
Only when running Proxmox like a desktop client, with frequent reboots, does the rootflags option really make a difference ....

But I see that btrfs support is still on the way.
For now I have added the Kali Linux repository and installed grub-btrfs. It worked out of the box (a rough sketch below).
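Roughly what that looked like - a sketch from memory, so treat the repository line as an assumption; the Kali archive keyring also has to be trusted before apt update succeeds, and mixing a foreign rolling repository into a PVE install is risky:

Code:
root@pve:~# echo 'deb http://http.kali.org/kali kali-rolling main' > /etc/apt/sources.list.d/kali.list
root@pve:~# apt update
root@pve:~# apt install grub-btrfs    # ships a /etc/grub.d/ hook that lists snapshots
root@pve:~# update-grub               # snapshots now appear as grub menu entries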

But I'm unhappy again: the default Proxmox installation with btrfs creates a single volume "/", and CTs are subvolumes inside /.
Taking a snapshot of the Proxmox system / will therefore include all CTs, and when you have to go back to this snapshot you will lose all your changes in those CTs.
I would prefer to strictly separate the operating system (Debian & Proxmox) from data like containers/VMs ... (see the sketch below for what such a split could look like)
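For illustration, one possible split - a sketch only: the subvolume name @guests is my own invention, <fs-uuid> stands for the UUID of the btrfs file system, and /var/lib/pve/local-btrfs is (if I remember correctly) the path the PVE installer's btrfs storage uses; existing guest data would still have to be moved over:

Code:
# mount the top level of the file system (subvolid=5) somewhere temporary
root@pve:~# mount -o subvolid=5 UUID=<fs-uuid> /mnt
# create a dedicated subvolume for guest data, next to the root file tree
root@pve:~# btrfs subvolume create /mnt/@guests
# mount it where the guest data should live, permanently via fstab
root@pve:~# echo 'UUID=<fs-uuid> /var/lib/pve/local-btrfs btrfs subvol=@guests 0 0' >> /etc/fstab
root@pve:~# mount /var/lib/pve/local-btrfs

Since btrfs snapshots stop at subvolume boundaries, a snapshot of / would then exclude @guests, and rolling back the OS would leave the guest data untouched.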
 
Sorry to hijack this, but I just tested the same configuration with btrfs raid1 on my test system, simulating a missing disk. As per the above, I added "rootflags=degraded" to the grub command line.
If I disconnect the Crucial SSD, Proxmox boots just fine.
If I disconnect the Plextor SSD, Proxmox boots, but into emergency mode. What am I missing?

See the screenshot below.
(attached: Screenshot_2022-07-11_15-46-58.png)
/etc/fstab is as follows:
Code:
root@proxmoxDC:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=baa0f1ad-39b7-4e27-b04c-f61f13e35ab5 / btrfs defaults 0 1
UUID=C0A0-D7F7 /boot/efi vfat defaults 0 1
proc /proc proc defaults 0 0
 
Looks like you have only one working ESP (EFI system partition) to boot from - the one on the Plextor. Look into proxmox-boot-tool to set up multiple ESPs, which it will keep in sync (a sketch below). I guess you have to remove the current one from /etc/fstab and let proxmox-boot-tool handle it.
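Something along these lines - /dev/sdX2 is a placeholder, check with lsblk which partition is the (unused) ESP on the other SSD:

Code:
root@pve:~# proxmox-boot-tool status            # list the ESPs currently configured
root@pve:~# proxmox-boot-tool format /dev/sdX2  # only if that ESP is not formatted yet
root@pve:~# proxmox-boot-tool init /dev/sdX2    # register it; boot files get synced to it
root@pve:~# proxmox-boot-tool refresh           # re-sync all registered ESPs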
 
That's strange, because I installed a fresh Proxmox ... I'd expect this to work on a default install?
 
UUID=C0A0-D7F7 /boot/efi vfat defaults 0 1
My guess: the UUID of the EFI partition in your fstab is the one from your Plextor SSD,
so you get into emergency mode when it is missing, since /boot/efi cannot be mounted.
As a stop-gap measure, comment out the /boot/efi line in your /etc/fstab and try continuing to boot - with the fstab posted above, that would look like the snippet below.
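Code:
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=baa0f1ad-39b7-4e27-b04c-f61f13e35ab5 / btrfs defaults 0 1
#UUID=C0A0-D7F7 /boot/efi vfat defaults 0 1
proc /proc proc defaults 0 0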

If this works, then keep in mind that you need to adapt this setup if your "first" SSD dies.

I hope this helps!
 
Could you please share the actual error message you get when booting without the Plextor?
 
OK, I started again from scratch: installed a fresh Proxmox, allowed degraded boot, commented out /boot/efi, and now it boots with either drive.

BUT, now I'm starting to have doubts about raid1 in general: when I booted with one disk everything was fine, then I rebooted with the second disk connected again, and now I have a read-only file system.

So what do you suggest in that case? If one drive fails, should you only ever add an empty replacement drive?
Or simply forget about RAID altogether and do plain btrfs with regular snapshots?
 
OK, I started again from scratch: installed a fresh Proxmox, allowed degraded boot, commented out /boot/efi, and now it boots with either drive.

BUT, now I'm starting to have doubts about raid1 in general: when I booted with one disk everything was fine, then I rebooted with the second disk connected again, and now I have a read-only file system.

So what do you suggest in that case? If one drive fails, should you only ever add an empty replacement drive?
Or simply forget about RAID altogether and do plain btrfs with regular snapshots?

I installed PVE with raid1 on btrfs and I have the same symptoms.
Did you do anything else besides the steps below?
  • add "rootflags=degraded" to the kernel command line in /boot/grub/grub.cfg
  • comment out the /boot/efi line in /etc/fstab
By the way, in case of this symptom, is there a way to continue booting from the initramfs?
 
Hello *,

Sorry to warm up this old thread again.
I just ran into a similar problem with my small test cluster, still running PVE 7.x with the rootfs on btrfs raid1, while testing an unresponsive disk.

Using rootflags=rw,degraded seems a viable option if you need to bring up your host even with a broken disk -
maybe just once, to migrate guests with local data to a node in good condition.

Based on Stoiko's reply #4, I tend to agree that using the rootflags=degraded kernel option for all grub menu entries may be dangerous, because you might miss the fact that you have lost redundancy once a disk has errors -
unless you add additional means of notification or error messages to warn about that. PVE 7 apparently does not do this by default; you need to add something yourself (one possibility is sketched below).
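One ad-hoc way to get such a notification - a sketch only: the script name is made up, and it assumes btrfs-progs with the -c/--check option plus working local mail delivery for root:

Code:
#!/bin/sh
# /etc/cron.daily/btrfs-health (hypothetical): mail root if any btrfs
# device error counter on / is non-zero; "btrfs device stats -c" exits
# with a non-zero status in exactly that case
out=$(btrfs device stats -c / 2>&1) || echo "$out" | mail -s "btrfs errors on $(hostname)" root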

The other view is:
it is quite cumbersome to remember the right commands at recovery time, because in most cases the documentation is not located right next to your machine.

My request (to Proxmox) would be:
at installation/upgrade time, add the kernel command line above as an additional grub boot entry, but not as the grub default.
This way the system will still stop booting upon error, but people doing manual intervention can easily select the right item from grub's menu.

This would mean modifying a file, probably located in /etc/grub.d/ or /etc/default/grub.d/proxmox-ve.cfg, to add such a boot entry with the command line from above (a sketch follows below).
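A sketch of what such an entry could look like in /etc/grub.d/40_custom - the values in angle brackets are placeholders; copy them from a working menuentry in the generated /boot/grub/grub.cfg:

Code:
menuentry 'Proxmox VE (degraded btrfs raid)' {
        insmod btrfs
        search --no-floppy --fs-uuid --set=root <root-fs-uuid>
        linux   /boot/vmlinuz-<version> root=UUID=<root-fs-uuid> ro rootflags=degraded
        initrd  /boot/initrd.img-<version>
}

Afterwards run update-grub (or proxmox-boot-tool refresh on systems booted via proxmox-boot-tool) so the entry shows up in the menu.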

You may also need to use proxmox-boot-tool when booting your node via EFI; consider its init and refresh options.

And, as a reminder in case you did not modify grub's kernel command line as a preventive measure and a disk fails:

Code:
# in the initramfs emergency shell: mount the degraded root manually
mount -t btrfs -o rw,degraded /dev/sda3 /root

You may need to replace /dev/sda3 with the device carrying the non-defective root partition.
Afterwards press Ctrl-D to resume the boot process.

Thanks,
Flo
 