ZFS attach on boot-drive - Is there a right way?

ChESch

Hello everybody!

I had a strange situation this weekend and was wondering if there is a "right" way to do what I did.
I have a server with only one SSD as boot drive, which was installed with ZFS by the Proxmox installer around 6 months ago. This weekend, I added a second SSD of the same size to the server and put it into the ZFS pool with zpool attach rpool /dev/sdd3 /dev/sdf. The Proxmox version was still 5.4.
Now the old drive has a partition table like this:
Code:
Device       Start       End   Sectors   Size Type
/dev/sde1       34      2047      2014  1007K BIOS boot
/dev/sde2     2048   1050623   1048576   512M EFI System
/dev/sde3  1050624 468877278 467826655 223.1G Solaris /usr & Apple ZFS

And the new one looked like this:
Code:
Device         Start       End   Sectors   Size Type
/dev/sdf1       2048 488380415 488378368 232.9G Solaris /usr & Apple ZFS
/dev/sdf9  488380416 488396799     16384     8M Solaris reserved 1

I did not look at this at the time, because the attach worked without errors.
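In hindsight, a quick comparison of both partition tables before attaching would have shown the mismatch, for example (device names as in my setup above):
Code:
# print the GPT of the old and the new SSD to compare the layouts
sgdisk -p /dev/sde
sgdisk -p /dev/sdf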

Then I started the PVE 6 upgrade.
During the apt dist-upgrade, I ran into an error from GRUB:
Code:
The GRUB boot loader was previously installed to a disk that is no longer present, or whose unique identifier has changed for some reason.
It is important to make sure that the installed GRUB core image stays in sync with GRUB modules and grub.cfg.
Please check again to make sure that GRUB is written to the appropriate boot devices.
If you're unsure which drive is designated as boot drive by your BIOS, it is often a good idea to install GRUB to all of them.
Note: it is possible to install GRUB to partition boot records as well, and some appropriate partitions are offered here.
However, this forces GRUB to use the blocklist mechanism, which makes it less reliable, and therefore is not recommended.
It then prompted me with a list of all my installed disks. I checked both the old and the new drive (/dev/sde and /dev/sdf), but the installation failed on the new drive. When I selected only the old drive, it worked.
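Side note: if you want to get back to that device selection dialog outside of the upgrade, it can usually be re-run the standard Debian way (nothing Proxmox-specific):
Code:
# re-opens the GRUB install device selection from the grub-pc package
dpkg-reconfigure grub-pc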

I then googled and found a kind of hacky solution: see here
I did as described, and now my partition table looks like this:
Code:
Device         Start       End   Sectors   Size Type
/dev/sdf1       2048 488380415 488378368 232.9G Solaris /usr & Apple ZFS
/dev/sdf2        512      2047      1536   768K BIOS boot
/dev/sdf9  488380416 488396799     16384     8M Solaris reserved 1

It boots fine, but it does not look right, and I have a bad feeling that it could fail at any time...
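For reference, the workaround boils down to roughly the following (my reconstruction from the partition table above, not necessarily the exact commands from the linked post):
Code:
# create a small BIOS boot partition in the free space at the start of the new disk
sgdisk -n 2:512:2047 -t 2:EF02 /dev/sdf
# then install GRUB onto the new disk
grub-install /dev/sdf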

So now my question is: what would be the right way to attach a new hard drive to a single ZFS boot drive in Proxmox? Do I have to be concerned about the current setup? Should I reformat the new disk and resilver from scratch?

Any advice is welcome, and I'm grateful for every comment!

If you need any more info on the setup, I will provide it.
 
I have found some posts regarding this problem, see here:
https://forum.proxmox.com/threads/p...all-with-one-disk-initially.53601/post-261614
https://forum.proxmox.com/threads/d...fs-raidz-1-root-file-system.22774/post-208998
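For reference, the approach described in those posts boils down to roughly this (only a sketch; device names are from my setup above, and the ZFS partition is part 3 in the installer layout):
Code:
# replicate the partition layout of the healthy boot disk onto the new disk
sgdisk /dev/sde -R /dev/sdf
# give the new disk its own random GUIDs
sgdisk -G /dev/sdf
# mirror only the ZFS partition instead of the whole disk
zpool attach rpool /dev/sde3 /dev/sdf3
# afterwards make the new disk bootable (grub-install or pve-efiboot-tool, see below)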

I have also written a blog post (in German) about this on my blog, see here

One question remains:
One post says you should do a grub-install, the other tells you to use pve-efiboot-tool.

I did it with the grub-install method and it worked, but I am still curious whether it makes a difference to use one or both solutions. Maybe it has to do with BIOS vs. UEFI boot?
If I get no answer (again) and I find time, I will try to test it on my own...

An answer would be much appreciated!
 
OK, according to this document: if you use UEFI, you might need the pve-efiboot-tool method, and if you use BIOS, you need grub-install... I will test whether both work at the same time, but I see no reason why it shouldn't work.
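A quick way to check which mode the machine actually booted in (a generic Linux check, nothing Proxmox-specific):
Code:
# this directory only exists when the system was booted via UEFI
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"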
 
My Google searches led me to this thread, but it wasn't much help in terms of a resolution. My issue was similar, in that I replaced a hard drive and re-added it. I hope this can help someone else. The following command revealed that the new drive, sdb, was partitioned differently from the rest of the drives.

Bash:
root@pbs:~# lsblk -o +FSTYPE
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0   512M  0 part            vfat
└─sda3   8:3    0   931G  0 part            zfs_member
sdb      8:16   0 931.5G  0 disk
├─sdb1   8:17   0 931.5G  0 part            zfs_member
└─sdb9   8:25   0     8M  0 part
...

This process was completed successfully on Proxmox Backup Server 2.3-1 (the latest as of December 16, 2022). The following commands worked for me. Using zpool status, I identified the drive.

Bash:
zpool offline rpool ata-WDC_WD10XXXX-XXXXOLD_WD-WXXXXXXXXOLD-part3
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb
zpool replace -f rpool ata-WDC_WD10XXXX-XXXXOLD_WD-WXXXXXXXXOLD-part3 ata-WDC_WD10XXXX-XXXXNEW_WD-WXXXXXXXXNEW-part3

proxmox-boot-tool format /dev/sdb2 --force
proxmox-boot-tool init /dev/sdb2

Basically, take the hard drive offline with the zpool offline command as seen above. Then use sgdisk with -R to copy the partition table from a healthy drive in the zpool, and run sgdisk again with -G to create unique GUIDs for the new drive. Lastly, use zpool replace to replace the OLD drive's ZFS partition (-part3) with the NEW drive's partition (-part3). The resilver may take days (as tested on a 1TB 5400 RPM HDD). The last two commands prepare the second partition of the new drive by formatting it and initializing it to make it bootable.
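While the replace is running, the resilver progress can be watched with zpool status, for example (pool name as above):
Bash:
# refresh the pool status every minute until the resilver is done
watch -n 60 zpool status rpool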

This command can be used to verify whether or not proxmox-boot-tool completed successfully.

Code:
root@pbst:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
002B-CEA2 is configured with: uefi (versions: 5.13.19-6-pve, 5.15.53-1-pve, 5.15.74-1-pve)
0018-DABD is configured with: uefi (versions: 5.13.19-6-pve, 5.15.53-1-pve, 5.15.74-1-pve)

Bash:
root@pbs:~# lsblk -o +FSTYPE
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0   512M  0 part            vfat
└─sda3   8:3    0   931G  0 part            zfs_member
sdb      8:16   0 931.5G  0 disk
├─sdb1   8:17   0  1007K  0 part
├─sdb2   8:18   0   512M  0 part            vfat
└─sdb3   8:19   0   931G  0 part            zfs_member
...

Source(s)
 
An addition:
In the case of old PVE installations on hardware with BIOS and GRUB boot, you need to run grub-install /dev/<new disk> instead of...
Code:
proxmox-boot-tool format /dev/<newdisk>2 --force
proxmox-boot-tool init /dev/<newdisk>2
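For completeness, the BIOS/GRUB counterpart is then simply (pointing at the whole disk, not a partition; e.g. /dev/sdb from the example above):
Code:
grub-install /dev/sdb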
 
