[SOLVED] via reinstall. I realized that I would not have to sit through a long restore from backup as long as the VM disks were still on the ZFS pool. So I re-installed Proxmox VE 8, re-imported the pool and recovered my VMs.
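For anyone who lands here later, the recovery went roughly like this after the Proxmox VE 8 reinstall (a sketch from memory; local-zfs is the pool name from my setup and the -f may or may not be needed on yours):

# import the existing data pool on the freshly installed node
zpool import -f local-zfs
# confirm the VM disks (zvols) are still on the pool before touching anything else
zfs list -t volume -r local-zfs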
----
Hi everyone,
Here is my situation:
I have a Dell R620 with an H310 Mini flashed to LSA that has been running Proxmox VE for 2 years. I needed to replace the old SSDs because of wearout.
My old disk configuration was bad: I had put all 4 SSDs into ZFS RAID-Z1 during installation, which I later realized gave me no real advantage; because the disks were identically partitioned and only the large storage partition used ZFS, I would not have been able to recover easily from a single disk failure anyway.
So I connected the old Proxmox VE system to a Proxmox Backup Server and backed up all my VMs. I then removed the old disks and replaced them with a new configuration: a 120GB SSD as the boot drive and 3 identical 960GB enterprise SSDs that I plan to use as a ZFS mirror (1+1) with 1 spare.
I installed Proxmox VE 7.4-1 onto the 120GB SSD and it booted fine. I then manually created a ZFS mirror pool from two of the 960GB SSDs and called it local-zfs. Then I restored the parts of /etc/ and /etc/pve related to the network, VM and remote storage configuration, and everything looked perfect: my new machine saw my backup server, I saw all my VMs, and I was able to restore them all from backup.
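For reference, the pool was created manually along these lines (a sketch; the /dev/disk/by-id placeholders stand in for my actual device paths, ashift=12 and the content types are just my choices, and the spare can also be attached later):

# create a two-way mirror from two of the 960GB SSDs, with the third as a hot spare
zpool create -o ashift=12 local-zfs mirror /dev/disk/by-id/SSD1 /dev/disk/by-id/SSD2 spare /dev/disk/by-id/SSD3
# register the pool as a ZFS storage target in Proxmox VE
pvesm add zfspool local-zfs --pool local-zfs --content images,rootdir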
The problem began soon afterwards, when I installed some OS updates and finally decided to reboot because there was a new kernel among them. After I rebooted, I got the dreaded
Failed to import pool 'rpool'
message in BusyBox. This problem is well documented and has a known solution that involves adding a root delay to the kernel command line in the GRUB configuration. However, the way I understand things, that solution does not apply to my problem: the fixable failure is caused by an existing pool not being imported fast enough, whereas in my case the OS looks for a pool called rpool that does not exist and has never existed. I can issue 'zpool import local-zfs' from the BusyBox prompt, but if I reboot as instructed in the popular solution I end up in the very same place.
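For completeness, the commonly suggested fix for that other failure looks roughly like this (not what helped here; the delay value is arbitrary and quoted from memory):

# /etc/default/grub: give the initramfs extra time to find the boot pool
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# then regenerate the GRUB configuration and reboot
update-grub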
In my boot configuration I read:
linux /boot/vmlinuz-6.2.16-5-pve root=/dev/mapper/pve-root ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
I don't understand why it has been created like this, given that I do not have an 'rpool' pool and my OS does not boot from ZFS.
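In case it helps with the diagnosis: if a file restored from the old ZFS-root install carried that entry over, I would expect something along these lines to locate and clean it up (a sketch I have not yet tried on the broken system; the relevant file depends on whether the machine boots via GRUB or proxmox-boot-tool):

# find where the ZFS root arguments come from
grep -r "root=ZFS" /etc/default/grub /etc/default/grub.d/ /etc/kernel/cmdline 2>/dev/null
# after removing the root=ZFS=... and boot=zfs arguments, regenerate the boot configuration
update-grub
update-initramfs -u -k all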
I guess I could try to reinstall everything, but it takes 4-5 hours to restore the VMs from backup and I really don't want to have to do that. Besides, I am not at all convinced the result would be different: if I just repeat the install, then update and reboot, and it fails again because there is no ZFS pool called 'rpool', all I will have gained is having to restore my VMs yet again.
Does this situation sound familiar to anyone? Could anybody offer a hint towards a solution?
Thanks in advance to anyone who read through everything.