In fact, it works!! Yeahhh
I'm so happy; I now have a way to easily restore any snapshotted version of the OS in any kind of situation, in less than 15 minutes (thanks to the OVH installer and live network rescue system...)
[...]
I think the thread can be closed now
Hi all.
I've just done something similar.
Why? I wanted to resize the 2x cheap 1 TB NVMes used in a RAIDZ1 mirror for the rpool, to reclaim some of their space for ZFS caching. Since you cannot simply resize them, I took a snapshot of the root, reinstalled Proxmox, and put the snapshot back, in line with what jeanlau did.
My steps:
These steps are for those who use the rpool only for the Proxmox OS.
Take a snapshot
I'm using zfs-auto-snapshot, so there are always plenty of fresh ones. Just verify that they are actually there (see the check below).
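Before wiping anything, I'd double-check that the snapshot you want to restore later really exists on the backup pool (pool/dataset names here are from my syncoid setup; adjust them to yours):
# list the five most recent replicated snapshots of the root dataset
zfs list -t snapshot -o name,creation -s creation -r rust/syncoid/rpool/ROOT/pve-1 | tail -n 5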
Install Proxmox VE (terminal UI, Debug mode)
Download the latest ISO and write it to a USB stick (I used 8.1-1)
Be very mindful of disk selection!! Only select the disks you need to rebuild the rpool.
With 500 MB RAIDZ1 on the 2x 1 TB disks
DO NOT REBOOT, press Alt+F3
Install systemd-boot
apt update
apt -y install systemd-boot systemd-boot-efi
apt -y upgrade
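A minimal sanity check that both packages actually landed in the installer environment (apt obviously needs working networking here; skip this if you trust the apt output):
# confirm the packages are installed before relying on proxmox-boot-tool's systemd-boot mode
dpkg -s systemd-boot systemd-boot-efi | grep -E '^(Package|Status)'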
Import rpool
zpool import rpool
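It doesn't hurt to look at what the fresh install actually created before deleting anything:
# sanity check: show the datasets on the freshly installed rpool
zfs list -r -o name,used,mountpoint rpool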
Delete the root dataset the installer just created
zfs destroy rpool/ROOT/pve-1
Import the pool containing the snapshot (mine is called "rust")
zpool import -f rust
# or import all available pools: zpool import -af
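By the way, if you forget what the backup pool is called, zpool import without any arguments just lists the pools that could be imported, without importing anything:
# show importable pools (name, id, state) without touching them
zpool import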
Send the snapshot back to the Proxmox root dataset
zfs send -vvv rust/syncoid/rpool/ROOT/pve-1@autosnap_2023-11-28_12:57:39_hourly | zfs receive -vvv rpool/ROOT/pve-1
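A quick check that the receive landed before exporting everything again (also note the mountpoint column, it tells you which path to use in the chroot steps further down):
# the root dataset should exist again on rpool
zfs list -o name,used,mountpoint rpool/ROOT/pve-1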
Export the pools
zpool export rust
zpool export rpool
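If both exports went through, zpool list should report that nothing is imported anymore:
# should print "no pools available" once both pools are exported
zpool list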
Setup chroot
zpool import -f -R /mnt rpool
for i in proc dev sys run ; do mount -o rbind /$i /mnt/mnt/ROOT/pve-1/$i ; done
NB: If you get a root shell before the installation, you can simply use /mnt (not /mnt/mnt/ROOT/pve-1)
Check that the mounts are set up correctly:
mount | grep pve-1
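If you are unsure which exact path to use for the rbind mounts above and the chroot below (it depends on the mountpoint property the received dataset came with), this prints where the dataset is actually mounted under the /mnt altroot:
# effective mountpoint of the restored root dataset, including the /mnt altroot prefix
zfs list -H -o mountpoint rpool/ROOT/pve-1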
Enter chroot
chroot /mnt/rpool/ROOT/pve-1 /bin/bash
or (if you did not just install a Proxmox system):
chroot /mnt /bin/bash
source /etc/profile
vim /etc/kernel/cmdline # if you need to change anything
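For reference, on a root-on-ZFS install booted via proxmox-boot-tool the file typically contains a single line like the one below (assuming the default dataset name; add any extra kernel parameters on the same line):
# typical /etc/kernel/cmdline for root on ZFS
root=ZFS=rpool/ROOT/pve-1 boot=zfs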
systemd-boot: cleanup
proxmox-boot-tool clean
systemd-boot: init EFI partition
# find the ESPs, e.g.: lsblk -o +FSTYPE | grep nvme[12] | grep p2
# see https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool#3._Finding_potential_ESPs
proxmox-boot-tool init /dev/<EFI partition, usually between 500-1000MB>
proxmox-boot-tool init /dev/<EFI partition, usually between 500-1000MB>
proxmox-boot-tool status
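For what it's worth, on my layout the lsblk output pointed at partition 2 of each NVMe, so the two init calls above worked out to roughly this (device names are only an illustration; use whatever lsblk shows on your box, one init per ESP):
# one init per disk's ESP; nvme1n1p2 / nvme2n1p2 are examples from my layout, not gospel
proxmox-boot-tool init /dev/nvme1n1p2
proxmox-boot-tool init /dev/nvme2n1p2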
Correct root mountpoint
zfs set mountpoint=/ rpool/ROOT/pve-1
If this command fails and the system later hangs on two lines starting with 'EFI' right after the BIOS has loaded, go back into the installer console, set up the chroot again (but with /mnt), import the rpool, and fix the mountpoint:
zpool import rpool -f
zfs set mountpoint=/ rpool/ROOT/pve-1
zpool export rpool
REBOOT
Import rpool
You may be dropped into an emergency shell; if so, import the pool manually:
zpool import -f rpool
or import them all
zpool import -fa
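Alternatively, in my experience you can often just leave the emergency shell after a successful import and the boot continues on its own (if it doesn't, reboot as below):
# after the manual import, exiting the initramfs emergency shell normally resumes the boot
exit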
REBOOT
DONE.
Sources:
https://pve.proxmox.com/wiki/ZFS:_S...xmox_Boot_Tool#Switching_to_proxmox-boot-tool
https://pve.proxmox.com/wiki/ZFS:_S...iring_a_System_Stuck_in_the_GRUB_Rescue_Shell
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot
I think the thread can be closed again now ;-)
This is known to work. I had problems earlier that appeared difficult to troubleshoot (BIOS update, NVMe firmware update, kernel upgrade, move from GRUB to systemd-boot, disk swap, CPU scheduler and mode change, all in one go). Yes, I got over-enthusiastic while finally having a bit of time on my hands. All fixed now though!
Cool article about what is possible (e.g., snapshot before upgrade)
https://oblivious.observer/posts/poc-boot-environments-proxmoxve6/
PS Sorry for the flat list. No time to format it now.