Will my system survive a reboot?

Amplificator

Member
Aug 5, 2020
I have a Proxmox 6.4 system where I have, at this point, replaced every single drive in it.

I think I have rebuilt grub as well, but I haven't actually rebooted the system yet because I'm not 100% sure that what I have done is correct and that it will actually be able to boot up again.

In my server I have 8 drives and it's set up using ZFS.

When I replaced a drive, I took it offline, swapped the defective drive for a new one, and copied the GPT partition table from one of the working drives already in the system onto the new drive. I used these commands:

Code:
sgdisk /dev/sda -R /dev/sdg
sgdisk -G /dev/sdg

After that was done I used "zpool replace" to replace the old drive with the new one and waited for it to finish resilvering.
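
Roughly, the full per-drive sequence looked like this (just a sketch; /dev/sdX, /dev/sdY, the pool name rpool and the old-disk placeholder are stand-ins - rpool and partition 3 as the ZFS data partition are simply the Proxmox defaults):

Code:
# Sketch of one drive replacement; /dev/sdX = healthy disk whose layout is copied,
# /dev/sdY = new disk, rpool = pool name (Proxmox default) - adjust to your setup.
zpool offline rpool <old-disk-id>       # take the failing member offline

# after physically swapping the drive: copy the GPT layout, then randomize GUIDs
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# replace the old member with the new disk's ZFS data partition (partition 3)
zpool replace rpool <old-disk-id> /dev/sdY3
zpool status rpool                      # wait until resilvering has finished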

When it eventually finished resilvering, I ran these commands to prepare and set up the drive for grub:

Code:
proxmox-boot-tool format /dev/sda2 --force
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool clean
proxmox-boot-tool status

That completed the grub setup for the newly added drive.
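
Put differently, for each replaced disk the boot setup boiled down to this (a sketch; /dev/sdY2 stands for that disk's 512M second partition, the ESP):

Code:
proxmox-boot-tool format /dev/sdY2 --force   # create a fresh FAT ESP on the new disk
proxmox-boot-tool init /dev/sdY2             # install the bootloader and register the ESP
proxmox-boot-tool clean                      # drop entries for ESPs that no longer exist
proxmox-boot-tool status                     # confirm the new ESP is listed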

I have done the above procedure for all 8 drives at this point but as I said, I have never rebooted yet.

Can anyone please tell me if what I did above is correct or, if it is not, what I should do to make sure the system will reboot correctly when the time comes?
Is there any way I can verify that it will indeed boot without issues?
 
first and most important: Make sure you have a working backup of all data you need!

It would help if you could post the outputs of the proxmox-boot-tool commands - otherwise it's hard to see what the current state is.

Additionally, please post the output of `lsblk`.
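
For example, run as root (the extra lsblk column is just a suggestion so the filesystem types are visible):

Code:
proxmox-boot-tool status
lsblk -o +FSTYPE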

Also make sure to check out the wiki page on switching to proxmox-boot-tool (not exactly your use case - but the information should be helpful, and it contains the steps to repair a system should it not boot):

https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

I hope this helps!
 
I have backups and they work - I check that regularly. So I'm not really worried about data loss as much as the downtime.

"lsblk -o +FSTYPE" outputs:

Code:
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sda      8:0    1 953.9G  0 disk            zfs_member
├─sda1   8:1    1  1007K  0 part            zfs_member
├─sda2   8:2    1   512M  0 part            vfat
└─sda3   8:3    1 953.4G  0 part            zfs_member
sdb      8:16   1 953.9G  0 disk            zfs_member
├─sdb1   8:17   1  1007K  0 part            zfs_member
├─sdb2   8:18   1   512M  0 part            vfat
└─sdb3   8:19   1 953.4G  0 part            zfs_member
sdc      8:32   1 953.9G  0 disk            zfs_member
├─sdc1   8:33   1  1007K  0 part            zfs_member
├─sdc2   8:34   1   512M  0 part            vfat
└─sdc3   8:35   1 953.4G  0 part            zfs_member
sdd      8:48   1 953.9G  0 disk            zfs_member
├─sdd1   8:49   1  1007K  0 part            zfs_member
├─sdd2   8:50   1   512M  0 part            vfat
└─sdd3   8:51   1 953.4G  0 part            zfs_member
sde      8:64   1 953.9G  0 disk            zfs_member
├─sde1   8:65   1  1007K  0 part            zfs_member
├─sde2   8:66   1   512M  0 part            vfat
└─sde3   8:67   1 953.4G  0 part            zfs_member
sdf      8:80   1 953.9G  0 disk            zfs_member
├─sdf1   8:81   1  1007K  0 part            zfs_member
├─sdf2   8:82   1   512M  0 part            vfat
└─sdf3   8:83   1 953.4G  0 part            zfs_member
sdg      8:96   1 953.9G  0 disk            zfs_member
├─sdg1   8:97   1  1007K  0 part            zfs_member
├─sdg2   8:98   1   512M  0 part            vfat
└─sdg3   8:99   1 953.4G  0 part            zfs_member
sdh      8:112  1 953.9G  0 disk            zfs_member
├─sdh1   8:113  1  1007K  0 part            zfs_member
├─sdh2   8:114  1   512M  0 part            vfat
└─sdh3   8:115  1 953.4G  0 part            zfs_member
zd0    230:0    0    96G  0 disk
zd16   230:16   0    32G  0 disk
zd32   230:32   0    32G  0 disk
zd64   230:64   0     5T  0 disk

"proxmox-boot-tool status" outputs:

Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
D98A-C2F1 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
D9D1-1321 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
DA16-9218 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
DA5D-5B54 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
DA9C-2216 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
DAD4-87C8 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
DB15-C3E7 is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)
DB4E-EFAB is configured with: grub (versions: 5.4.128-1-pve, 5.4.143-1-pve)

"ls -lah /dev/disk/by-uuid/" outputs:

Code:
total 0
drwxr-xr-x 2 root root 220 Dec 24 00:47 .
drwxr-xr-x 7 root root 140 Nov 25 20:22 ..
lrwxrwxrwx 1 root root  10 Nov 27 01:40 14397245595010639865 -> ../../sdh3
lrwxrwxrwx 1 root root  10 Nov 27 01:37 D98A-C2F1 -> ../../sda2
lrwxrwxrwx 1 root root  10 Nov 27 01:37 D9D1-1321 -> ../../sdb2
lrwxrwxrwx 1 root root  10 Nov 27 01:37 DA16-9218 -> ../../sdc2
lrwxrwxrwx 1 root root  10 Nov 27 01:38 DA5D-5B54 -> ../../sdd2
lrwxrwxrwx 1 root root  10 Nov 27 01:38 DA9C-2216 -> ../../sde2
lrwxrwxrwx 1 root root  10 Nov 27 01:38 DAD4-87C8 -> ../../sdf2
lrwxrwxrwx 1 root root  10 Nov 27 01:39 DB15-C3E7 -> ../../sdg2
lrwxrwxrwx 1 root root  10 Nov 27 01:40 DB4E-EFAB -> ../../sdh2

I hope that info helps :)
 
I hope that info helps :)
It does - from the outputs it looks like all the disks should be bootable and properly configured.
This is usually a very good indicator that the system will reboot quite happily.

But as I said - always make sure you have a working backup - and also check out the wiki page.
 
Thanks.

I do have an additional question.
The link you posted only lists the proxmox-boot-tool commands for preparing the disks and not the sgdisk command that I used (or a similar command).

Does that mean that proxmox-boot-tool format will create the 3 partitions itself?

I used the sgdisk commands to copy the partition layout from a working disk to a brand-new one, but if proxmox-boot-tool format does the same, I can skip the sgdisk commands in the future.