I have recently replaced (technically not replaced, just left all the drives in the machine) my rpool drives with new, larger nvme drives.
The system was using:
A Dell PowerEdge T410 with a PCIe NVMe + SATA adapter, in a ZFS mirror for boot. Both drives were set up with three partitions, the second being the 512 MB EFI partition. Both drives had the boot files on them and showed up with their UUIDs in proxmox-boot-tool status.
I added two PCIe-to-NVMe adapters, each with a PNY XLR8 NVMe drive, to replace the two drives above. I was conscious that my system might not support booting from NVMe, but I ignored that and went on.
I went through the process of sgdisk -R and -G to replicate the partition tables and generate new GUIDs. I attached the new partitions to the rpool and let them resilver. I used proxmox-boot-tool to format and init the EFI partition (the second partition) on each new drive, and here's where I got stuck.
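For reference, the clone-and-attach steps looked roughly like the following (device names here are illustrative, matching my layout; double-check source vs. target before running sgdisk -R, since it overwrites the target's partition table):

```shell
# Copy the partition table from the existing disk (source, last argument)
# onto the new disk (the -R argument) -- this overwrites the target's table.
sgdisk -R /dev/nvme1n1 /dev/nvme2n1

# Randomize the disk and partition GUIDs on the new disk so they
# don't collide with the source disk's GUIDs.
sgdisk -G /dev/nvme1n1

# Attach the new third partition to the existing mirror member
# and let ZFS resilver.
zpool attach rpool nvme2n1p3 nvme1n1p3
```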
After running
Bash:
proxmox-boot-tool format /dev/nvme0n1p2 --force
proxmox-boot-tool init /dev/nvme0n1p2
proxmox-boot-tool format /dev/nvme1n1p2 --force
proxmox-boot-tool init /dev/nvme1n1p2
proxmox-boot-tool refresh
the refresh output doesn't show the new partitions, and neither does proxmox-boot-tool status.
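Since a successful init also registers an EFI boot entry ("Linux Boot Manager") with the firmware, I checked the firmware's own view of the boot entries as a cross-check:

```shell
# List the UEFI boot entries the firmware knows about, verbosely;
# a successful proxmox-boot-tool init should have added a
# "Linux Boot Manager" entry pointing at the new ESP.
efibootmgr -v
```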
Remember, both of the existing drives already have properly configured EFI partitions; they work properly and show up in the commands above. One of them is an NVMe drive and has been in use for a while.
After messing around, expanding the mirror to fill the space on the new drives, and doing a few other seemingly pointless things (visiting the BIOS and UEFI settings) that accomplished almost nothing, I tried the init again, and for some reason it worked on one of the new drives!
Here's where I am at now:
Bash:
user@pve:~ $ sudo blkid | grep vfat
/dev/nvme2n1p2: UUID="A0F7-4413" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="13a9607b-662d-497c-9385-7fc4ac9689e0"
/dev/sdg2: UUID="A0F7-CCEB" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="41a39977-5476-4aa7-b481-c656725bce53"
/dev/nvme0n1p2: UUID="A07E-5995" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="13a9607b-662d-497c-9385-7fc4ac9689e0"
/dev/nvme1n1p2: UUID="5AE5-6CEA" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="6fbce0bf-7710-4b1d-b598-c1ea421b6a59"
Bash:
user@pve:~ $ sudo zpool status rpool
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:05:26 with 0 errors on Thu Jan 11 22:17:28 2024
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme1n1p3  ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0

errors: No known data errors
Bash:
user@pve:~ $ sudo proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
A07E-5995 is configured with: uefi (versions: 6.2.16-20-pve, 6.5.11-6-pve, 6.5.11-7-pve)
A0F7-4413 is configured with: uefi (versions: 6.2.16-20-pve, 6.5.11-6-pve, 6.5.11-7-pve)
A0F7-CCEB is configured with: uefi (versions: 6.2.16-20-pve, 6.5.11-6-pve, 6.5.11-7-pve)
but!
Bash:
user@pve:~ $ sudo proxmox-boot-tool format /dev/nvme1n1p2 --force
UUID="5AE5-6CEA" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme1n1" MOUNTPOINT=""
Formatting '/dev/nvme1n1p2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.
user@pve:~ $ sudo proxmox-boot-tool init /dev/nvme1n1p2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="AD82-AB2E" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme1n1" MOUNTPOINT=""
Mounting '/dev/nvme1n1p2' on '/var/tmp/espmounts/AD82-AB2E'.
Installing systemd-boot..
Created "/var/tmp/espmounts/AD82-AB2E/EFI/systemd".
Created "/var/tmp/espmounts/AD82-AB2E/EFI/BOOT".
Created "/var/tmp/espmounts/AD82-AB2E/loader".
Created "/var/tmp/espmounts/AD82-AB2E/loader/entries".
Created "/var/tmp/espmounts/AD82-AB2E/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/AD82-AB2E/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/AD82-AB2E/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/AD82-AB2E/loader/random-seed successfully written (32 bytes).
Created EFI boot entry "Linux Boot Manager".
(Note: the above output doesn't match what I get from a successful init.)
And finally, after the above:
Bash:
user@pve:~ $ sudo proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/A07E-5995
Copying kernel and creating boot-entry for 6.2.16-20-pve
Copying kernel and creating boot-entry for 6.5.11-6-pve
Copying kernel and creating boot-entry for 6.5.11-7-pve
Copying and configuring kernels on /dev/disk/by-uuid/A0F7-4413
Copying kernel and creating boot-entry for 6.2.16-20-pve
Copying kernel and creating boot-entry for 6.5.11-6-pve
Copying kernel and creating boot-entry for 6.5.11-7-pve
Copying and configuring kernels on /dev/disk/by-uuid/A0F7-CCEB
Copying kernel and creating boot-entry for 6.2.16-20-pve
Copying kernel and creating boot-entry for 6.5.11-6-pve
Copying kernel and creating boot-entry for 6.5.11-7-pve
user@pve:~ $ sudo proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
A07E-5995 is configured with: uefi (versions: 6.2.16-20-pve, 6.5.11-6-pve, 6.5.11-7-pve)
A0F7-4413 is configured with: uefi (versions: 6.2.16-20-pve, 6.5.11-6-pve, 6.5.11-7-pve)
A0F7-CCEB is configured with: uefi (versions: 6.2.16-20-pve, 6.5.11-6-pve, 6.5.11-7-pve)
Running format + init doesn't do anything for /dev/nvme1n1p2 (UUID 5AE5-6CEA).
Questions:
- How do I get proxmox-boot-tool init to work on the final partition?
- Why did it suddenly work on the other new drive?
- Is there a way to clear some kind of cache or retained data and force it?
- Is there a manual way to do this, without proxmox-boot-tool init, that will still be maintained by proxmox-boot-tool?
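If it helps: I believe proxmox-boot-tool records the UUIDs of the ESPs it manages in /etc/kernel/proxmox-boot-uuids, and that status/refresh work from that list (an assumption on my part), which is why I'm wondering about stale cached state:

```shell
# proxmox-boot-tool keeps the list of managed ESP UUIDs here;
# status and refresh iterate over this file.
cat /etc/kernel/proxmox-boot-uuids

# Hypothetical manual fallback (untested, use with care): append the
# new ESP's UUID as shown by blkid after formatting, then refresh.
# echo "AD82-AB2E" >> /etc/kernel/proxmox-boot-uuids
# proxmox-boot-tool refresh
```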
Thanks so much,
fixjunk (with brokejunk)