[SOLVED] - ZFS Single Disk Failure

chrisj2020

Member
Jul 8, 2020
Hi there,

I'm running Proxmox 7 across a bunch of Intel NUCs, each with two drives installed. The first drive is an NVMe SSD hosting the Proxmox install itself; the second is a SATA SSD configured as a ZFS pool called zfs-core. This ZFS pool configuration is the same across all nodes in the Proxmox cluster.

One of the disks in one of the nodes has failed, and I have a replacement on the way. The guides I've read assume a multi-disk ZFS pool, whereas mine is a single-disk setup. I'm really struggling with the replacement steps I need to follow. Can someone help me plan for this with the steps/commands I should run?

Cheers,
Chris
 
I have not configured that so far. The same ZFS volume is defined at the cluster level, but I don't have any VMs being replicated yet.
 
All sorted, and easier than I expected. The replacement command I'd found failed because, with each node having only a single disk, the ZFS pool (called zfs-core) was no longer available on the affected cluster node. So I did the following:

1. Shut down the node and replace the failed SSD/HDD, then start the cluster node back up.
2. Browse to the node in question and click "Disks". This lists the attached disks, including the new one.
3. Select the new disk (in my case /dev/sda) and click "Initialize Disk with GPT".
4. Under "Disks", select "ZFS", then click "Create: ZFS".
5. Enter the old pool name (zfs-core in my case), select your new disk, and de-select "Add Storage".
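For anyone who prefers the command line, the GUI steps above should correspond roughly to the following. This is a sketch based on my setup: the device name /dev/sda and pool name zfs-core are assumptions from my nodes, so double-check yours (e.g. with `lsblk`) before running anything destructive:

```shell
# Wipe the new disk and write a fresh GPT label
# (sgdisk is part of the gdisk package, available on Proxmox)
sgdisk --zap-all /dev/sda

# Re-create a single-disk pool with the old name, so the
# existing cluster-level storage definition matches it again
zpool create -f zfs-core /dev/sda
```

Because the zfs-core storage is already defined at the cluster level, there is no need to add a new storage entry, which is why "Add Storage" is de-selected in the GUI step.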

That's it. The pool came back up, rejoined the cluster-level storage assigned to the Proxmox cluster, and I'm able to move VMs/CTs between nodes again.
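Before trusting the rebuilt pool, it's worth a quick health check (again assuming the pool name zfs-core from my setup):

```shell
# Pool should report state ONLINE with no read/write/checksum errors
zpool status zfs-core

# Proxmox should list the zfs-core storage as active on this node
pvesm status
```

Note that on a single-disk pool there is no redundancy, so any data previously on the failed disk is gone; this procedure only restores the storage itself, and VMs/CTs need to be restored from backup or migrated back from other nodes.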