I wanted to move my ZFS pool from some SATA SSDs to some NVMe drives. I followed a guide that took me through the steps of replicating the partitions, getting new GUIDs, formatting the boot partitions, and initializing the boot partitions, using the following commands:
sgdisk --replicate=/dev/sdb (new empty disk) /dev/sda (existing disk with data on it)
sgdisk --randomize-guids /dev/sdb (new disk)
pve-efiboot-tool format /dev/sdb2 --force (new disk)
pve-efiboot-tool init /dev/sdb2 (new disk)
zpool replace -f <pool> <old zfs partition> <new zfs partition>
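(For reference, the way I understand it you can sanity-check each replace at this point with something like the following; <pool> is just a placeholder for the pool name:)
Code:
# watch the resilver finish on the pool (<pool> is a placeholder)
zpool status <pool>
# confirm the new ESP got registered alongside the existing ones
proxmox-boot-tool status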
After trying this, it turned out my Dell T440 can only boot from NVMe if you use their BOSS-S1 card. I reverted everything and wiped all the NVMe drives, but now I'm running into an issue where, during boot, the server still tries to boot from the NVMe drives even though they are no longer there. This results in a bunch of boot failures before it finally makes its way through the list to the SSDs. Is there a way to clear the removed boot partitions out of the GRUB list? When I run
proxmox-boot-tool status
I get this output
Code:
root@MMCI-HV-PVE:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
WARN: /dev/disk/by-uuid/536D-5616 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/53E3-2200 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/5439-9C0F does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/54B5-ED27 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/6BD4-8912 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/6C6E-DFAB does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
EFEC-BE47 is configured with: uefi (versions: 6.8.12-2-pve, 6.8.12-4-pve)
EFF4-F6CD is configured with: uefi (versions: 6.8.12-2-pve, 6.8.12-4-pve)
EFF5-732A is configured with: uefi (versions: 6.8.12-2-pve, 6.8.12-4-pve)
EFF5-F123 is configured with: uefi (versions: 6.8.12-2-pve, 6.8.12-4-pve)
I followed what it said and did a
nano /etc/kernel/proxmox-boot-uuids
removed the unused UUIDs, then ran proxmox-boot-tool refresh.
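(Side note: if I'm reading the tool's help right, there also seems to be a clean subcommand that does this edit for you instead of hand-editing the file, roughly:)
Code:
# my understanding is this drops UUIDs whose ESP no longer exists (not verified)
proxmox-boot-tool clean --dry-run
proxmox-boot-tool clean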
When I check proxmox-boot-tool status now, I only see the good UUIDs, but when I boot I still get a bunch of boot failures. It did not do this before I tried switching out the NVMe drives. Does anyone know how to clear these out?
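My current guess is that the leftover failures come from stale UEFI boot entries in the server's firmware rather than from GRUB itself, so I was going to try listing them and deleting the dead ones with efibootmgr (the entry number below is only an example for whatever shows up pointing at the missing NVMe partitions):
Code:
# list the firmware boot entries and the partitions they point to
efibootmgr -v
# delete a stale entry by its BootXXXX number (0003 is only an example)
efibootmgr -b 0003 -B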