ZFS mirror, disk hung during update, taken offline. What's next?

Dec 26, 2018
Hello.

So I just updated one of our servers running ZFS in RAID 1. One of the disks was REALLY slow, the update was stuck for over an hour, and the disk had 3 checksum failures in ZFS.
Running smartctl against it took over 10 seconds to respond, compared to the other disk.
Once I took the disk offline in ZFS, the update sped up and finished in 3 minutes.
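For reference, taking the slow disk offline boils down to a couple of zpool commands. A minimal sketch, assuming a default Proxmox install where the pool is named `rpool` (the device name here is a placeholder, not taken from this thread):

```shell
# Check pool health and the per-disk checksum error counts first
zpool status rpool

# Take the misbehaving disk offline (device path is an example --
# use the one zpool status reports for your pool)
zpool offline rpool /dev/sdd3

# The mirror now runs degraded on the remaining disk
zpool status rpool
```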

Rebooted the server; it is now quicker than it used to be (thanks to removing the failed/slow disk) and boots up just fine.

Now the question: is there anything I need to fix when replacing the disk, other than following these guides I usually use?

https://forum.proxmox.com/threads/disk-replacement-procedure-for-a-zfs-raid-1-install.21356/

https://edmondscommerce.github.io/replacing-failed-drive-in-zfs-zpool-on-proxmox/
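The guides above come down to roughly these steps. A hedged sketch, not a verbatim copy of either guide; `sdX`/`sdY` and the pool name `rpool` are placeholders to check against your own `zpool status` output:

```shell
# 1. Clone the partition table from the healthy disk (sdX) to the
#    new disk (sdY), then randomize the new disk's GUIDs
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# 2. Tell ZFS to replace the failed/offlined device with the new
#    ZFS partition (partition 3 on a default Proxmox layout)
zpool replace rpool <old-device> /dev/sdY3

# 3. Watch the resilver until it completes
zpool status rpool
```

On top of this, the bootloader side (GRUB/ESP) of the new disk still needs to be set up, which is what the rest of this thread is about.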

Because it says it can't find the grub directory on one of the disks.
[screenshot]
 
The disks are not showing a UUID.
[screenshot]

But they have the correct partitions, and ZFS works with them.
[screenshot]

[screenshot]

ZFS is working.
[screenshot]

I replaced both disks before rebooting; for one disk I used the device path, for the other /dev/disk/by-id.
Both are working, and the path even changed from /dev/sdg3 to /dev/sdd3 after boot.

So I guess I just need to create new UUIDs for the boot disks, then add them to /etc/kernel/proxmox-boot-uuid, and run a grub update?
 
I would advise against doing it manually as you suggested. I think you should follow the steps proxmox-boot-tool format /dev/sda2 and proxmox-boot-tool init /dev/sda2 for both new drives (make sure to pick the right partitions). You can use proxmox-boot-tool clean to remove the old ones.
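In shell form, the suggested steps look roughly like this. The partition paths are examples only; pick the ~512M EFI System partition on each *new* disk, not the small BIOS boot partition (which is what trips up the poster below):

```shell
# Format and register the EFI System partition of each new disk
# (sda2 / sdb2 are assumptions -- verify with lsblk or fdisk -l first)
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2

# Drop entries for ESPs that no longer exist (the removed disks)
proxmox-boot-tool clean

# Verify which ESPs are now configured
proxmox-boot-tool status
```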
 
This does not work.
Code:
root@proxmox4:~# proxmox-boot-tool format /dev/sda1
UUID="13726227660537878781" SIZE="1031168" FSTYPE="zfs_member" PARTTYPE="21686148-6449-6e6f-744e-656564454649" PKNAME="sda" MOUNTPOINT=""
E: '/dev/sda1' is too small (<256M).

Also, this partition was set up by the Proxmox installer, so this is strange.

EDIT: Never mind, I targeted the BIOS boot partition, not the EFI System partition.
I'm so used to the first partition being boot, the second swap, and the third the OS.
Thank god for the safety check here.
 
Yes, it is very important to choose the right partitions: the 512 MB EFI System partition. Please note that sda, sdb, sdc and sdd can change from one reboot to another. From what you are showing, I cannot tell whether sdd2 is currently part of one of your ZFS pools, or whether it was used with ZFS before and there is some old data left on the disk.
If fdisk -l /dev/sdd shows that it is a 512M EFI System partition (and/or gdisk -l /dev/sdd shows 512.0 MiB EF00), then it should be safe to use --force.
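A quick way to cross-check the partition type before reaching for --force (the exact output wording varies between util-linux versions, so treat the comments as what to look for, not guaranteed verbatim text):

```shell
# GPT-aware listing; the EFI System partition shows type code EF00
gdisk -l /dev/sdd

# fdisk view; look for "EFI System" as the type and a ~512M size
fdisk -l /dev/sdd

# lsblk can print human-readable partition type names as well
lsblk -o NAME,SIZE,PARTTYPENAME /dev/sdd
```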
 
