[SOLVED] Adding NVMe SSD breaks rpool

azim

New Member
Aug 13, 2024
So about half a year ago, I built my home server and installed Proxmox on it.
I chose two SATA SSDs as a ZFS mirror for the system and selected that layout during installation.

Now, half a year later, I decided I wanted to add another NVMe SSD to the system, and was greeted by the following message upon boot:
[screenshot of the boot error attached: 1723578944753.png]
A quick search revealed that the likely cause is the use of /dev/sd? names when creating the pool.
After removing the NVMe SSD and rebooting, I checked the rpool status, and the result confirmed that it was indeed using the /dev/sd? names.
At this point I don't remember the exact install process; maybe there was a checkbox I didn't tick, but I'm not sure such behavior should be the default.
Now, the real question is: how do I work around all that?
Is there a way to move the rpool ZFS mirror to /dev/disk/by-id/ device names?
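For reference, checking which device paths the pool currently uses can be done roughly like this (the -P flag prints full device paths; the output will of course differ per system):

    zpool status -P rpool        # show the mirror's vdevs with full device paths, e.g. /dev/sda3
    ls -l /dev/disk/by-id/       # map the stable by-id names to the current sdX devices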
 

Have you already resolved the problem in the meantime? If not, you're probably right that the system has reordered the drive names, so the ZFS filesystem cannot be mounted because the newly connected SSD is not part of the zpool yet.

You can try to boot the system with the new SSD disconnected, set ZPOOL_IMPORT_PATH in /etc/default/zfs to ZPOOL_IMPORT_PATH="/dev/disk/by-id", and then run update-initramfs -u -k all to apply the changes. Afterwards, shut down the host, reconnect the new SSD, and it should boot as before.
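In concrete terms, the steps would look roughly like this (a sketch assuming a standard Debian/Proxmox ZFS install where /etc/default/zfs already exists; use whatever editor you prefer):

    # boot with the new SSD disconnected, then:
    nano /etc/default/zfs
    # set (or uncomment) the line:
    #   ZPOOL_IMPORT_PATH="/dev/disk/by-id"
    update-initramfs -u -k all   # rebuild the initramfs so the new import path is used
    # shut down the host, reconnect the new SSD, and boot again
    shutdown -h now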
 
Yes, sorry, I forgot to write here that I solved it.
It turns out the issue was a bit different:
- the PCIe order did get shuffled by the NVMe drive
- the system was booting properly at first, but the pool was degrading at some point during the boot process
- after investigating further, I discovered that my TrueNAS VM, which was supposed to get the HBA card, was instead getting the motherboard's SATA controller that the system SSDs were attached to, which in turn caused them to drop out and break rpool.

With the NVMe removed, I disabled autoboot for all my VMs and containers, then added the NVMe and updated the settings for the VMs, which solved the issue.
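For anyone hitting something similar, this is roughly how a misassigned passthrough device and the autoboot setting can be checked (VM ID 100 is only a placeholder; use your TrueNAS VM's actual ID):

    lspci -nn                      # list PCI devices; note the addresses of the HBA and the onboard SATA controller
    qm config 100 | grep hostpci   # show which PCI address the VM's passthrough entry actually points at
    qm set 100 --onboot 0          # disable autostart for the VM until the mapping is fixed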
 
Thanks for sharing your solution with the forum! It will surely help other people facing similar issues.
 
