I arguably shot myself in the foot on this one, but I didn't see this particular issue covered anywhere, so I thought I'd make a quick post about it.
I just bought this mini-PC with 2 M.2 slots and 4 NICs, and I've gone through a bunch of installations to test various configurations.
The spec sheet initially claimed mSATA was supported, but either that was wrong or the machine never recognized the spare I had.
So I settled on 2 NVMe Gen3 x4 drives, one running at x4 and the other at x1 (via an adapter in the Wi-Fi M.2 slot).
Based on quick perf measurements, I intended to use the x1 drive for Proxmox itself and the x4 drive for VMs.
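For reference, a quick non-destructive way to compare the two drives is something like this with fio (device names are examples; check lsblk first, and run it once per drive):

# Read-only sequential throughput check; substitute your actual device name.
fio --name=seqread --filename=/dev/nvme0n1 --readonly --rw=read --bs=1M --ioengine=libaio --iodepth=16 --direct=1 --runtime=30 --time_based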
Adding a drive to the x4 slot after install triggers a rename of all the NICs (the PCI enumeration shifts, the same way nvmeXn1 becomes nvme(X+1)n1), which breaks network connectivity, so I started my final install with both drives in place.
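As an aside, one way to make NIC names survive that kind of PCI reshuffle (I didn't go this route, so treat it as a sketch) is to pin each name to its MAC address with a systemd link file:

# /etc/systemd/network/10-lan0.link (example file name, NIC name, and MAC; one file per NIC)
[Match]
MACAddress=aa:bb:cc:dd:ee:01

[Link]
Name=lan0

Then reference lan0 in /etc/network/interfaces, and regenerate the initramfs (update-initramfs -u) so the rename also applies early in boot.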
That's when I made a mistake: I didn't uncheck the x4 drive during the creation of the RAID0 zpool...
So I ended up with a striped zpool across both drives, which seemed like a really bad idea given the x1/x4 discrepancy (a stripe is bottlenecked by its slowest member).
I'm pretty sure the only way to fix that was to reinstall, so I did, unchecking the x4 drive this time!
But the first boot after that installation failed, because I now had 2 rpools:
- one on the x1 drive selected during that most recent install
- one with the striped configuration over both drives
Here I was, presented with an (initramfs) prompt and cryptic (at least to me) instructions.
After a bit of research, I did the following:
zpool import (this listed 2 pools, both named rpool, each with a numeric ID; the second one showed a corrupted disk in its stripe)
zpool import XXX (where XXX is the ID of the single-disk pool)
exit
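For anyone hitting the same prompt, the listing from the bare zpool import looked roughly like this (the IDs are made up and I'm quoting the layout from memory):

  pool: rpool
    id: 1234567890123456789
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
...

  pool: rpool
    id: 9876543210987654321
 state: DEGRADED
...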
And the boot resumed. In the GUI, I promptly wiped the x4 drive and recreated a pool on it.
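If you'd rather do that cleanup from a shell, the stale label on the old stripe member can be cleared directly; this is destructive, so double-check the device name (/dev/nvme1n1 here is just an example):

zpool labelclear -f /dev/nvme1n1   # erase the leftover ZFS label
wipefs -a /dev/nvme1n1             # optionally clear any remaining filesystem signatures too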
I thought I was done, but a reboot ended with another error (something about the pool having been used by a previous installation). This time the instructions were a bit clearer:
zpool import -f rpool
exit
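My understanding of that second error: ZFS stamps a pool with the hostid of the system that last imported it, and the freshly installed system's hostid didn't match, hence the one-time forced import. If it were to recur on every boot, the commonly suggested remedy (which I didn't need) is to generate a stable hostid and rebuild the initramfs:

zgenhostid                   # writes /etc/hostid (ships with zfsutils)
update-initramfs -u -k all   # rebuild so early boot sees the same hostid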
I might have been able to avoid these steps by doing something different during the disambiguation, but I didn't retry.
The machine has been booting fine since.
HTH