proxmox-boot-tool status shows only one entry on a RAID 1 system

leonidas_o
One of the disks in my RAID 1 mirror had to be replaced. Following the docs, I got the new disk working, and a host system reboot also works, BUT when I execute proxmox-boot-tool status, I get the following:

Code:
proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
3272-7F43 is configured with: grub (versions: 5.15.111-1-pve, 5.15.116-1-pve)

Running that on my other Proxmox host shows two UUID entries, each configured with grub.
Code:
proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
A575-C889 is configured with: grub (versions: 5.15.111-1-pve, 5.15.116-1-pve)
A575-FCFE is configured with: grub (versions: 5.15.111-1-pve, 5.15.116-1-pve)


As the last steps, after waiting for the disk to finish resilvering, I executed:
Code:
efibootmgr -v
EFI variables are not supported on this system.


zpool status -v
...


grub-install /dev/nvme0n1
grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real


/usr/sbin/grub-install.real /dev/nvme0n1
Installing for i386-pc platform.
Installation finished. No error reported.


update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.15.116-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
WARN: /dev/disk/by-uuid/3272-2232 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/3272-7F43
    Copying kernel 5.15.111-1-pve
    Copying kernel 5.15.116-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.116-1-pve
Found initrd image: /boot/initrd.img-5.15.116-1-pve
Found linux image: /boot/vmlinuz-5.15.111-1-pve
Found initrd image: /boot/initrd.img-5.15.111-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done


proxmox-boot-tool clean
Checking whether ESP '3272-2232' exists.. Not found!
Checking whether ESP '3272-7F43' exists.. Found!
Sorting and removing duplicate ESPs..


proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
3272-7F43 is configured with: grub (versions: 5.15.111-1-pve, 5.15.116-1-pve)

nvme0n1 is the newly replaced disk, whereas nvme1n1 is the old, healthy disk.
Something doesn't look right here: the status command should show both entries, like on the other host, right? The grub-install command executed successfully without any errors, so am I missing something?
 
Has no one experienced the same issue? Can someone confirm that I definitely need two entries? Or could proxmox-boot-tool status be a bit buggy?
 
Does your old working disk (3272-7F43) have these partitions?
partition 1: BIOS boot
partition 2: EFI system
partition 3: Solaris/ZFS

Does the new disk have the same partitions?
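For comparing, something like this prints both partition tables (device names assumed from your setup):
Code:
sgdisk -p /dev/nvme0n1
sgdisk -p /dev/nvme1n1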
 
@milew yes, I did everything as described in the docs:

Code:
fdisk -lu
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG MZVL2512HCJQ-00B00             
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FD2B8845-1315-418C-B84B-0B941D4BB001

Device           Start        End   Sectors   Size Type
/dev/nvme0n1p1      34       2047      2014  1007K BIOS boot
/dev/nvme0n1p2    2048    1050623   1048576   512M EFI System
/dev/nvme0n1p3 1050624 1000215182 999164559 476.4G Solaris /usr & Apple ZFS


Disk /dev/nvme1n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG MZVLB512HAJQ-00000             
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2CEDCDDC-E453-4D8D-8647-801941E9BC21

Device           Start        End   Sectors   Size Type
/dev/nvme1n1p1      34       2047      2014  1007K BIOS boot
/dev/nvme1n1p2    2048    1050623   1048576   512M EFI System
/dev/nvme1n1p3 1050624 1000215182 999164559 476.4G Solaris /usr & Apple ZFS

Code:
lsblk

nvme0n1     259:0    0 476.9G  0 disk
├─nvme0n1p1 259:1    0  1007K  0 part
├─nvme0n1p2 259:2    0   512M  0 part
└─nvme0n1p3 259:3    0 476.4G  0 part
nvme1n1     259:4    0 476.9G  0 disk
├─nvme1n1p1 259:5    0  1007K  0 part
├─nvme1n1p2 259:6    0   512M  0 part
└─nvme1n1p3 259:7    0 476.4G  0 part

so I ran commands like:
Code:
sgdisk /dev/nvme1n1 -R /dev/nvme0n1   # replicate the partition table from the healthy disk to the new one
sgdisk -G /dev/nvme0n1                # randomise the new disk's GUIDs
ls -l /dev/disk/by-id/*               # look up the new disk's stable by-id path
zpool replace -f rpool /dev/disk/by-id/...
zpool status -v                       # check resilver progress

Code:
 pool: rpool
 state: ONLINE
  scan: resilvered 135G in 00:04:21 with 0 errors on Sun Nov 19 12:08:00 2023
config:

    NAME                                 STATE     READ WRITE CKSUM
    rpool                                ONLINE       0     0     0
      mirror-0                           ONLINE       0     0     0
        nvme-eui.002538ba1100d1bf-part3  ONLINE       0     0     0
        nvme-eui.0025388b91d66706-part3  ONLINE       0     0     0

errors: No known data errors

And then the GRUB commands from the initial post, see above.
 
My system mirrors 2 disks:
Code:
proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
4011-8644 is configured with: uefi (versions: 6.2.16-10-pve, 6.2.16-3-pve)
4013-7258 is configured with: uefi (versions: 6.2.16-10-pve, 6.2.16-3-pve)
root@pve4:~# blkid
/dev/sdb2: UUID="4013-7258" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="67839cf4-2eb7-4db2-a6de-c7975b5a8356"
/dev/sdb3: LABEL="rpool" UUID="9000620468298269184" UUID_SUB="8278260747013338253" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="dfc88c32-7ea7-4545-a37a-381141fd7443"
/dev/sda2: UUID="4011-8644" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="ee7784b4-e60f-41ce-9f3b-93309e2925b6"
/dev/sda3: LABEL="rpool" UUID="9000620468298269184" UUID_SUB="2107928158377139002" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="70db7ab4-6e91-4b79-aaf0-96ddf144be34"
/dev/sdb1: PARTUUID="28542cfd-d4ea-4ab7-8e86-7d16088c86f0"
 
did you also run proxmox-boot-tool format and init, like the docs say?
 
Maybe you copied the ESP data with dd from the old disk to the new one, so the UUID is the same. That would work too.
Verify this:
Code:
lsblk -o +UUID
blkid

If you want to change this, use:
Code:
proxmox-boot-tool format /dev/sdX2   # format sets a new UUID
proxmox-boot-tool init /dev/sdX2     # adds the entry
proxmox-boot-tool clean              # removes stale entries
 
@fabian no, I haven't run proxmox-boot-tool format and init, because after executing the following:
Code:
efibootmgr -v
EFI variables are not supported on this system.

It tells me that I'm using grub, so I thought your commands were only for UEFI and that I had to use /usr/sbin/grub-install.real /dev/nvme0n1?


@milew hmm, running lsblk -o +UUID showed me:
Code:
nvme0n1     259:0    0 476.9G  0 disk           
├─nvme0n1p1 259:1    0  1007K  0 part           
├─nvme0n1p2 259:2    0   512M  0 part           
└─nvme0n1p3 259:3    0 476.4G  0 part            2703738732596200003
nvme1n1     259:4    0 476.9G  0 disk           
├─nvme1n1p1 259:5    0  1007K  0 part           
├─nvme1n1p2 259:6    0   512M  0 part            3272-7F43
└─nvme1n1p3 259:7    0 476.4G  0 part            2703738732596200003

so they both have 2703738732596200003. I thought they had to be different, so I executed sgdisk -G /dev/nvme0n1 again and got:
Code:
sgdisk -G /dev/nvme0n1
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.

and rebooted the server. Afterwards it was still the same number; I then checked on my other server, and its disks share the same ID as well. Whoops, I hope I haven't made a mess by executing the randomise command again?
Do I have to do something else now, after running sgdisk -G /dev/nvme0n1 on a whim, like zpool replace ... etc.?
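Side note, as far as I can tell: for zfs_member partitions, the UUID that lsblk/blkid show is the ZFS pool GUID, which is shared by every member of the mirror; only UUID_SUB differs per device. So identical values are expected, and sgdisk -G only randomises the partition-table GUIDs, not this one. It can be cross-checked against the pool itself:
Code:
zpool get guid rpool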
 
oh and blkid looks like:
Code:
blkid
/dev/nvme0n1p3: LABEL="rpool" UUID="2703738732596200003" UUID_SUB="13879470365890177242" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="64a9b117-340c-41e4-9c4a-092552a00807"
/dev/nvme1n1p2: UUID="3272-7F43" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="146b79be-f7e2-4787-a1db-68d4bcca37a5"
/dev/nvme1n1p3: LABEL="rpool" UUID="2703738732596200003" UUID_SUB="7887724389835545683" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="3d1b04e0-9bc0-4e84-ad90-66de2d84918a"
/dev/nvme0n1p1: PARTUUID="9fd4e795-0f9a-4de1-9e66-c26cd8063fa9"
/dev/nvme0n1p2: PARTUUID="9c54984e-8d78-4ac2-85bb-f866008a9aaa"
/dev/nvme1n1p1: PARTUUID="f820c542-5976-4639-a23c-89551e3e2e54"

which does look off compared to my other Proxmox server: nvme0n1p2 has no filesystem UUID or TYPE at all. Something is wrong here, I would say.
 
Oh, my bad, I misunderstood the docs, I guess. Just to make sure I don't do something stupid: right now I don't need the first part anymore, so this can be skipped, right?
Code:
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>

And then, for my system, the commands are exactly like this, using nvme0n1p2 as the ESP, right? And no other commands are needed?
Bash:
# proxmox-boot-tool format <new disk's ESP>
proxmox-boot-tool format /dev/nvme0n1p2

# proxmox-boot-tool init <new disk's ESP> [grub]
proxmox-boot-tool init /dev/nvme0n1p2 grub
 
"proxmox-boot-tool init /dev/nvme0n1p2" without "grub" should be enough, since for legacy (non-EFI) systems it only supports grub anyhow. but yes, that (format+init of the "new" ESP) would be the procedure.
 
Hm, format worked, but init did not:

Code:
proxmox-boot-tool format /dev/nvme0n1p2
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Formatting '/dev/nvme0n1p2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.


Code:
proxmox-boot-tool init /dev/nvme0n1p2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
E: '/dev/nvme0n1p2' has wrong filesystem (!= vfat).

but when executing blkid, I see /dev/nvme0n1p2: UUID="FB54-A381" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="9c54984e-8d78-4ac2-85bb-f866008a9aaa", so it does say TYPE vfat. Any idea?
 
hmm, "lsblk" doesn't seem to agree? can you try "lsblk --bytes --pairs -o 'UUID,SIZE,FSTYPE,PARTTYPE,PKNAME,MOUNTPOINT' /dev/nvme0n1p2" ?
 
@fabian yes, okay, this command does not show vfat:
lsblk --bytes --pairs -o 'UUID,SIZE,FSTYPE,PARTTYPE,PKNAME,MOUNTPOINT' /dev/nvme0n1p2

Code:
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
 
it seems like something got stuck/cached, could you try the init command again after a reboot?
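Alternatively (untested in this particular case), re-reading the partition table and letting udev settle might clear the stale information without a reboot:
Code:
partprobe /dev/nvme0n1
udevadm settle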
 
okay, indeed something was cached; after the reboot, lsblk --bytes --pairs -o 'UUID,SIZE,FSTYPE,PARTTYPE,PKNAME,MOUNTPOINT' /dev/nvme0n1p2 showed the following:
Code:
UUID="FB54-A381" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""

and proxmox-boot-tool init /dev/nvme0n1p2 no longer threw an error:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="FB54-A381" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Mounting '/dev/nvme0n1p2' on '/var/tmp/espmounts/FB54-A381'.
Installing grub i386-pc target..
Installing for i386-pc platform.
Installation finished. No error reported.
Unmounting '/dev/nvme0n1p2'.
Adding '/dev/nvme0n1p2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/3272-7F43
    Copying kernel 5.15.116-1-pve
    Copying kernel 5.15.131-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.131-1-pve
Found initrd image: /boot/initrd.img-5.15.131-1-pve
Found linux image: /boot/vmlinuz-5.15.116-1-pve
Found initrd image: /boot/initrd.img-5.15.116-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
Copying and configuring kernels on /dev/disk/by-uuid/FB54-A381
    Copying kernel 5.15.116-1-pve
    Copying kernel 5.15.131-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.131-1-pve
Found initrd image: /boot/initrd.img-5.15.131-1-pve
Found linux image: /boot/vmlinuz-5.15.116-1-pve
Found initrd image: /boot/initrd.img-5.15.116-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done

and proxmox-boot-tool status now shows both entries:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
3272-7F43 is configured with: grub (versions: 5.15.116-1-pve, 5.15.131-1-pve)
FB54-A381 is configured with: grub (versions: 5.15.116-1-pve, 5.15.131-1-pve)

Finally, everything looks good to me. In case there is something else I have to do, let me know.
Thank you very much for your help, @fabian and @milew!
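One last note for anyone finding this later: once both ESPs are registered, the kernel hooks keep them in sync automatically on every kernel update; a manual re-sync is also possible with:
Code:
proxmox-boot-tool refresh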
 
