Proxmox ZFS mirror install with one disk initially?

cfnz
New Member · Feb 12, 2019
Hi,

I am trying to build a Proxmox server running ZFS using one disk initially, then add a second disk later. (Why? I will be breaking another mirror and using one of its disks to start the Proxmox install; once all is good, I will use the second disk to rebuild the mirror. I don't have any spare disks of the correct size to start from scratch with two disks.)

To make sure all goes well, I am first testing with a smaller disk setup, using a relatively old simple Dell T110 II server, booting in BIOS mode.

I installed Proxmox with one disk (the installer partitioned it with GPT, and I chose ZFS RAID 0). After the install, I shut down and added the second disk (as I would when doing it for real). I then mirrored the partition layout, copied the data from the first two smaller partitions, and attached the third partition to ZFS. The mirror sync reports complete (the ZFS mirror seemed very quick; it seems to copy only the used data on disk, unlike mdraid?).
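For anyone following along, the steps described above can be sketched as shell commands. The device names (/dev/sda, /dev/sdb), partition numbers, and pool name rpool are assumptions based on a default Proxmox ZFS install; these commands are destructive, so double-check device names before running anything like this.

```shell
# Replicate the partition table from the original disk to the new one,
# then give the copy fresh GUIDs so the two disks don't clash
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Byte-copy the two small boot partitions (BIOS boot area and ESP)
dd if=/dev/sda1 of=/dev/sdb1 bs=1M
dd if=/dev/sda2 of=/dev/sdb2 bs=1M

# Attach the large third partition to the pool; this turns the
# single-disk vdev into a mirror and starts a resilver
zpool attach rpool /dev/sda3 /dev/sdb3
zpool status rpool    # watch until the resilver reports complete
```

The quick sync is expected, by the way: a ZFS resilver copies only the blocks that are actually allocated, whereas mdraid synchronises the entire block device.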

I then pulled the first disk to see if I could boot on the second mirror, but no such luck.

I am not that familiar with gpt, but I installed parted and all partitions seem to have the same sizes and flags. I used dd to copy the two smaller partitions.

I am not sure what is on these smaller partitions (flagged bios_grub, and boot/esp). I thought I would at least be able to mount the second one (the ESP), since I understood it to be a FAT32 partition?
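For reference, the bios_grub partition holds GRUB's raw core image and has no filesystem, so it cannot be mounted by design; the ESP, however, is plain FAT32 and should mount normally. A quick check (partition numbers assumed from a default layout):

```shell
# The ESP (flags "boot, esp") is a FAT32 filesystem
mount -t vfat /dev/sda2 /mnt
ls /mnt        # on a UEFI install this holds an EFI/ directory
umount /mnt

# /dev/sda1 (bios_grub) is raw GRUB boot code with no filesystem,
# so any attempt to mount it will fail - that is expected
```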

Booting into an Ubuntu live disk and looking at the partitions, they are all labelled as ZFS members, even though only sda3 and sdb3 show up in zpool status. Not sure why (and I could not figure out how to mount them).

I suspect that on the sdb drive, something in one (or both) of those partitions needs to be changed so it will boot on the second drive?

The summary question, I guess, is: how do I add a second disk to create a mirror that can boot from either sda or sdb, like I can with mdraid? Having a mirror that can't boot when disk sda goes bad won't be that useful :-(, though at least I could still get to the big ZFS partition to recover the data :).

Thanks for any help.
 

cfnz
New Member · Feb 12, 2019
Thank you, I just found it in another Google search too...

I didn't realise I still had to do a grub-install /dev/sdb... and that fixed it.

Does anyone know if I need to worry about copying the partitions (sda1 and sda2) to the new disk? The linked page does not mention it.

Regards
Colin
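For anyone landing here later: zpool attach only mirrors the data partition, which is why the second disk wasn't bootable on its own. On a BIOS/legacy install the missing piece is just GRUB's boot code on the second disk (device name assumed):

```shell
# Write GRUB's MBR boot code and core image (which lands in the
# bios_grub partition) onto the second disk so it can boot by itself
grub-install /dev/sdb
```

As far as I know, the ESP is not actually used for booting in BIOS mode, so the dd copy of sda2 mainly matters if the machine is ever switched to UEFI; copying it does no harm either way.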
 

Sweets
New Member · Jul 31, 2019
Sorry to bring back an old thread, when running grub-install I get the following error:

# grub-install /dev/sda
Installing for x86_64-efi platform.
grub-install: error: cannot find EFI directory.

(/dev/sda is the target device)

Proxmox 6.0-1, fresh install, RAID 0 installed to the smallest disk.

Anyone else run into this?
 

fabian
Proxmox Staff Member · Jan 7, 2016
We don't use GRUB for UEFI with ZFS in PVE 6.x anymore; see https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot
 

Sweets
New Member · Jul 31, 2019
Thanks for the assist. I stumbled a bit because I didn't expect a UUID was needed / couldn't be automatically calculated, but for the other novices out there, I ended up achieving the mirror via:

sgdisk /dev/sda -R /dev/sdb
sgdisk --randomize-guids /dev/sdb
pve-efiboot-tool format /dev/sdb2 --force
pve-efiboot-tool init /dev/sdb2
zpool attach rpool (sda disk uuid-part3) (sdb disk uuid-part3)

(Note: I checked "zpool status -v", which showed rpool as -part3, aka /dev/sda3.)
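To fill in the UUID step for other novices: the stable identifiers for the -part3 partitions can be read straight out of /dev/disk/by-id. The serial-style names below are placeholders for whatever your disks actually report there:

```shell
# List the stable, serial-based names for the data partitions
ls -l /dev/disk/by-id/ | grep part3

# Attach using those names instead of /dev/sdX (placeholder ids shown)
zpool attach rpool ata-DISK_MODEL_SERIAL1-part3 ata-DISK_MODEL_SERIAL2-part3
zpool status -v rpool
```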
 

JohnTanner
New Member · Sep 25, 2019
Hello,
I'm sorry to bring this thread back up too.

Right now I am facing the same problem as you did back then, and at the moment I'm trying to simulate the situation with 2-3 USB sticks.

Another breakdown:

Start with one drive, then add another one later to make a mirror, without losing the data on the first drive.
So far I have followed THIS guide (very last post on the page), but I ran into the same error as you did.
Your pve-efiboot-tool commands work fine for me, but when I try to execute the zpool attach command, I get the following error:

invalid vdev specification
use '-f' to override the following errors:
/dev/sdc1 contains a filesystem of type 'vfat'

Can I just override this error with -f as suggested? Is there anything else I have missed? Or is there a different way?


As additional info:

I prepared the "origin" drive with fdisk (d, g, w) and then created a single-drive ZFS pool. The other drive got the same fdisk treatment, and all the other commands except zpool attach work flawlessly. I have also tried erasing the target drive completely, but the same error still occurs. Also weird: zpool can't find the target disk by UUID, but the /dev/sdX form is... "findable"(?). Still, the error I pasted above persists.


Can anyone help me? :(
 

guletz
Renowned Member · Apr 19, 2017 · Brasov, Romania
Hi,

Do not use /dev/sdX names with ZFS. Instead, use your HDD serial number / WWN. Find them with:

ls -l /dev/disk/by-id

If you use the serial-based names, you can even move your HDDs to another server and ZFS will be fine. And when you need to replace a disk, it is very easy to identify, because the same serial/WWN is printed on the HDD's label.
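As a sketch of the portability point: a pool built on by-id names can be exported on one machine and re-imported on another by scanning that directory (pool name rpool assumed; for a root pool this would have to happen from a rescue environment):

```shell
# On the old machine, release the pool
zpool export rpool

# On the new machine, scan the stable by-id names and import
zpool import -d /dev/disk/by-id rpool
```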
 

JohnTanner
New Member · Sep 25, 2019
Thanks for your reply!
I read that it's better to do that, but the path wasn't given anywhere, so thanks for clarifying ^^

Do you maybe have an idea about the vfat error too? Is it harmful, or can it be ignored? Or could you maybe point me to an article that covers this? I could not find anything about it :/
 

guletz
Renowned Member · Apr 19, 2017 · Brasov, Romania
Hi,

I guess that when you created the partition on that device, you selected vfat instead of zfs as the partition type, and afterwards formatted it with vfat/FAT32. You could try changing it from vfat to zfs and then try again.

Also, -f will be OK. The error from ZFS is there to prevent a user mistake: "hey, there is already a filesystem here; if you really want to go forward, use -f".
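Both routes can be sketched as follows, assuming /dev/sdc1 is the new, empty partition from the error above (destructive commands, so make sure the device name is right):

```shell
# Option 1: remove the leftover vfat signature before attaching
wipefs /dev/sdc1        # list the signatures zpool is complaining about
wipefs -a /dev/sdc1     # erase them (only on the new, empty partition!)

# Option 2: simply let zpool overwrite the stale signature
# (<existing-part> is a placeholder for the current pool member)
zpool attach -f rpool <existing-part> /dev/sdc1
```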

Good luck / Bafta
 
