Any way to rebuild missing EFI volume from a ZFS mirrored pair?

victorhooi

Member
Apr 3, 2018
Hi,

OK, I have a Proxmox 6.4 cluster I set up.

The boot volume is a mirrored ZFS setup, with 2 x 1TB disks (/dev/nvme1n1 and /dev/nvme5n1).

I fat-fingered it and accidentally ran ceph-volume lvm zap on one of the two disks (/dev/nvme5n1) *sad face*.

The ZFS partition on that disk appears to be intact - ceph-volume hit a "resource busy" error before it could wipe it:

Code:
--> Zapping: /dev/nvme5n1
Running command: /usr/bin/dd if=/dev/zero of=/dev/nvme5n1p2 bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
 stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.00988992 s, 1.1 GB/s
--> Destroying partition since --destroy was used: /dev/nvme5n1p2
Running command: /usr/sbin/parted /dev/nvme5n1 --script -- rm 2
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/nvme5n1p3: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
-->  RuntimeError: could not complete wipefs on device: /dev/nvme5n1p3

However, I've noticed that the EFI volume (/dev/nvme5n1p2) on that disk is now gone:

Screen Shot 2021-06-23 at 11.26.34 pm.png

Firstly - what is the impact of this?

And secondly - is there any way to restore/rebuild this EFI volume please?

Thanks,
Victor
 

Attachments

  • Screen Shot 2021-06-23 at 11.25.56 pm.png

Stoiko Ivanov

Proxmox Staff Member
May 2, 2018
Firstly - what is the impact of this?
This depends on the exact setup of the zpool - if you used the PVE installer and created a RAID1 (mirrored vdev), you should be fine.

* I'd first scrub the zpool to check that the data is still all right (a scrub repairs any issues it finds)
* Afterwards, check whether the partition table is still intact (fdisk/parted) - compare the partitions to the ones on nvme1n1
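A minimal sketch of those two checks (assuming the pool is named rpool, which is what the PVE installer creates by default - adjust the pool and device names to your system):

```shell
# Scrub the pool; a scrub reads all data and repairs anything it can
# from the surviving half of the mirror. Check progress/results after.
zpool scrub rpool
zpool status rpool

# Compare the partition tables of the healthy and the zapped disk -
# on a stock PVE install both disks should show identical layouts.
fdisk -l /dev/nvme1n1
fdisk -l /dev/nvme5n1
```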

the reference documentation on ZFS administration should have all necessary steps:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_zfs_administration
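If the EFI partition really is missing from nvme5n1, one possible approach is to recreate it with the same geometry as on the healthy disk and then re-initialize it with proxmox-boot-tool (shipped with PVE 6.4; older releases called it pve-efiboot-tool). This is only a sketch - the sector numbers below are placeholders, so substitute the values that sgdisk reports for your healthy disk, and double-check the device names before running anything:

```shell
# Look up the exact start/end sectors of the EFI partition
# on the healthy disk.
sgdisk -i 2 /dev/nvme1n1

# Recreate partition 2 on the zapped disk with the SAME start/end
# sectors (2048:1050623 are placeholder values - use the ones
# reported above) and the EFI System type code.
sgdisk -n 2:2048:1050623 -t 2:EF00 -c 2:"EFI system partition" /dev/nvme5n1

# Format the new ESP and register it with the Proxmox boot loader,
# then verify that both disks are synced.
proxmox-boot-tool format /dev/nvme5n1p2
proxmox-boot-tool init /dev/nvme5n1p2
proxmox-boot-tool status
```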

I hope this helps!
 
