For a home install, I salvaged two SATA SSDs from another machine, and one device doesn't come up every time. I have a couple of SATA M.2 drives of the same advertised size I can use, so I'm thinking about just moving to those. After reading previous threads here and https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_change_failed_dev I think I understand the procedure, but I wanted to lay it out and get a sanity check before I proceed:
Current situation: I installed 8.3.1 fresh on a UEFI system on these two drives. I used ZFS, and the installer did the expected three-partition layout.
NOTE: I'm using sda/sdb here as shorthand; I'll actually use the `/dev/disk/by-id` names for these devices in the commands (a quick way to confirm those names is sketched after the list). Here are the drives:
- sda - current sketchy boot / rpool drive
- sdb - current healthy boot / rpool drive
- newA - new M.2 drive
- newB - new M.2 drive
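To double-check which `/dev/disk/by-id` name belongs to which physical drive before running anything, something like this should be enough (nothing here is specific to my hardware):
Bash:
# list the stable by-id names for whole disks, hiding the per-partition entries
ls -l /dev/disk/by-id/ | grep -v -- '-part'
# cross-reference model/serial against the kernel names
lsblk -o NAME,SIZE,MODEL,SERIAL
With that mapping in hand, the plan is: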
Bash:
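# copy the partition layout from the healthy drive to each new drive,
# then give each copy fresh GUIDs so they don't collide with sdb's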
sgdisk sdb -R newA
sgdisk -G newA
sgdisk sdb -R newB
sgdisk -G newB
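# add each new drive's third (ZFS) partition to the existing rpool mirror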
zpool attach -f rpool sdb_part3 newA_part3
zpool attach -f rpool sdb_part3 newB_part3
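# format and register the ESP (partition 2) on each new drive; the optional
# "grub" argument is only needed if the system actually boots via GRUB
# (e.g. with Secure Boot) -- on a plain UEFI/systemd-boot install it can be dropped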
proxmox-boot-tool format newA_part2
proxmox-boot-tool init newA_part2 [grub]
proxmox-boot-tool format newB_part2
proxmox-boot-tool init newB_part2 [grub]
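Before the reboot step I'd also want to confirm that the resilver onto the new partitions has finished and that both new ESPs were picked up; a minimal check, assuming the commands above all succeeded:
Bash:
# resilver must show as completed before pulling any mirror members
zpool status rpool
# should list all four ESPs by UUID, including the two new ones
proxmox-boot-tool status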
Once the above is done, I will:
- Shut down and, from the BIOS, force a boot from newA and then newB to confirm each new drive boots.
- Shut down again and pull sda and sdb from the box.
- Boot and let the boot process do what it wants (rpool will come up degraded since sda/sdb are still members).
- Detach sda and sdb from rpool (sketched below).
- Run `proxmox-boot-tool clean --dry-run` to confirm what it will do, then run it for real to remove the entries for the now-disconnected sda and sdb.
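For the last two bullets, this is roughly what I expect to run (sda/sdb again standing in for the real by-id names; if detaching by name fails because the disks are already gone, the numeric GUID shown in `zpool status` should work instead):
Bash:
# drop the old drives' ZFS partitions from the mirror
zpool detach rpool sda_part3
zpool detach rpool sdb_part3
# preview which ESP entries would be removed, then remove them for real
proxmox-boot-tool clean --dry-run
proxmox-boot-tool clean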