Hello everybody!
I had a strange situation this weekend and was wondering if there is a "right" way to do what I did.
I have a server with only one SSD as the boot drive, which was installed with ZFS by the Proxmox installer about 6 months ago. This weekend, I added a second SSD of the same size to the server and put it into the ZFS pool with zpool attach rpool /dev/sdd3 /dev/sdf. The Proxmox version was still v5.4. Now the old drive has a partition table like this:
Code:
Device Start End Sectors Size Type
/dev/sde1 34 2047 2014 1007K BIOS boot
/dev/sde2 2048 1050623 1048576 512M EFI System
/dev/sde3 1050624 468877278 467826655 223.1G Solaris /usr & Apple ZFS
And the new one looked like this:
Code:
Device Start End Sectors Size Type
/dev/sdf1 2048 488380415 488378368 232.9G Solaris /usr & Apple ZFS
/dev/sdf9 488380416 488396799 16384 8M Solaris reserved 1
I did not look at this at the time because the attach worked without errors. (In hindsight, I guess ZFS created this whole-disk layout because I attached /dev/sdf as a whole disk instead of a partition.)
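For what it's worth, the pool itself reported no problems; as far as I remember I only sanity-checked it with something like:
Code:
zpool status rpool   # both devices showed up under mirror-0 and the pool was ONLINE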
Then I started the PVE 6 upgrade. During the apt dist-upgrade, I ran into the following error from GRUB:
Code:
The GRUB boot loader was previously installed to a disk that is no longer present, or whose unique identifier has changed for some reason.
It is important to make sure that the installed GRUB core image stays in sync with GRUB modules and grub.cfg.
Please check again to make sure that GRUB is written to the appropriate boot devices.
If you're unsure which drive is designated as boot drive by your BIOS, it is often a good idea to install GRUB to all of them.
Note: it is possible to install GRUB to partition boot records as well, and some appropriate partitions are offered here.
However, this forces GRUB to use the blocklist mechanism, which makes it less reliable, and therefore is not recommended.
It then prompted me with all my installed disks. I checked both the old and the new drive (/dev/sde and /dev/sdf), but grub-install failed on the new drive. When trying only with the old drive, it worked.
I then googled and found a kind of hacky solution: see here
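I no longer have the exact commands, but judging by the resulting partition table below, it boiled down to roughly this (my reconstruction, not the literal contents of the link; the sector numbers are taken from the table further down):
Code:
# Rough reconstruction of the workaround (assumed, not the literal commands):
sgdisk -a 1 -n 2:512:2047 -t 2:EF02 /dev/sdf   # squeeze a BIOS boot partition into the gap before sdf1
grub-install /dev/sdf                          # reinstall GRUB onto the new disk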
I did as described, and now my partition table looks like this:
Code:
Device Start End Sectors Size Type
/dev/sdf1 2048 488380415 488378368 232.9G Solaris /usr & Apple ZFS
/dev/sdf2 512 2047 1536 768K BIOS boot
/dev/sdf9 488380416 488396799 16384 8M Solaris reserved 1
It boots fine, but it does not look right. I have a bad feeling that it could fail at any time...
So now my question is: what is the right way to attach a new hard drive to a single ZFS boot drive in Proxmox? Do I have to be concerned about the current setup? Should I reformat the new disk and resilver from scratch?
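To make it concrete what I mean by "reformat and resilver from scratch", I imagine something like the following (untested sketch; /dev/sde is the old disk and /dev/sdf the new one, as they show up right now):
Code:
zpool detach rpool /dev/sdf             # drop the whole-disk vdev (or whatever name 'zpool status' shows for it)
sgdisk --zap-all /dev/sdf               # wipe the improvised partition table
sgdisk -R /dev/sdf /dev/sde             # replicate the old disk's layout onto the new one
sgdisk -G /dev/sdf                      # randomize the GUIDs on the copy
zpool attach rpool /dev/sde3 /dev/sdf3  # mirror only the ZFS partition this time
grub-install /dev/sdf                   # and install GRUB on the new disk as well
Is that roughly the recommended procedure, or is there a better way?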
Any advice is welcome and I'm grateful for any comment!
If you need any more info on the setup, I will provide it.