Problems disabling ZFS pool import on boot up

javierolme

New Member
Nov 1, 2023
Hello.
I am in the process of moving all the functions of my home lab to Proxmox VE. I am new to Linux hypervisors and my Linux knowledge is basic, so please forgive me if I make rookie mistakes.
My problem is with virtualizing TrueNAS and its zpools. I have three ZFS pools that have been configured and working inside TrueNAS for a long time (several of them include metadata drives, though no cache drives yet). All of these drives are attached to a dedicated SATA PCIe controller.
No matter how hard I try, I am unable to disable the automatic import of those zpools at Proxmox startup, nor (although that would not be the definitive solution) can I import them manually when the boot drops into BusyBox.
The only way I can get Proxmox to start is to physically disconnect the SATA controller.

I have already followed all the steps I have found inside and outside this forum, including starting from scratch, etc. In my opinion the most relevant and useful thread of all is:
https://forum.proxmox.com/threads/how-to-prevent-zfs-pool-import-on-boot-up.132990/
Some others I've tried:
https://forum.proxmox.com/threads/proxmox-failed-to-start-import-zfs-pool-pool-name.117379/
https://forum.proxmox.com/threads/preventing-proxmox-from-importing-zpools-at-boot.25977/

After the modifications I ran "pve-efiboot-tool refresh", "update-initramfs -u -k all" and "proxmox-boot-tool refresh", just in case. I also tried deleting zpool.cache.
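Roughly the sequence I mean, assuming the default zpool.cache location on PVE:

update-initramfs -u -k all
proxmox-boot-tool refresh
rm /etc/zfs/zpool.cache

(As far as I know, pve-efiboot-tool is just the older name of proxmox-boot-tool, so running both should be redundant.)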

There are no zpools in my installation apart from rpool and the TrueNAS ones.

My setup (I can only list rpool because I am unable to boot with the TrueNAS zpools attached):
root@pve2:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   464G  47.3G   417G        -         -     0%    10%  1.00x    ONLINE  -

root@pve2:~# zpool get cachefile
NAME   PROPERTY   VALUE   SOURCE
rpool  cachefile  -       default

The import services look like this (device scanning and cache import have already been disabled following the instructions in the first thread I mentioned):

root@pve2:~# systemctl status zfs-import.service zfs-import-cache.service zfs-import-scan.service
○ zfs-import.service
Loaded: masked (Reason: Unit zfs-import.service is masked.)
Active: inactive (dead)

○ zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; disabled; preset: enabled)
Active: inactive (dead)
Docs: man:zpool(8)

○ zfs-import-scan.service - Import ZFS pools by device scanning
Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; disabled; preset: disabled)
Active: inactive (dead)
Docs: man:zpool(8)
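
For reference, the masked/disabled state shown above was produced with roughly these commands, followed by refreshing the initramfs as above:

systemctl mask zfs-import.service
systemctl disable zfs-import-cache.service zfs-import-scan.service
update-initramfs -u -k all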

Nothing works. At startup the script that imports the zpools always runs and there is no way to start Proxmox.
I'm desperate. I have the impression that this is a noob mistake and that I'm failing to update the initramfs or whatever other part of the boot process is involved.

I would greatly appreciate your help.
 
All of these drives are attached to a dedicated SATA PCIe controller.
I simply blacklist my HBA's driver in PVE. With PVE not trying to initialize the HBA card, it can't see the disks and therefore can't see the ZFS pools on them.
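
A minimal sketch of that approach; the module name below is only an example, check lspci -k to see which driver your controller actually uses, and keep in mind it blocks that driver for every card that uses it:

# /etc/modprobe.d/blacklist-hba.conf
# Example only: replace mpt3sas with the module reported by lspci -k
blacklist mpt3sas

followed by update-initramfs -u -k all and a reboot so the early boot environment stops loading it as well.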
 
Yeah. Last night I was able to spend some time with it.
It seems to work. I had additional problems because there is another identical HBA inside the system with the same ID, but it seems I was able to solve that by adding the bus number.
I already saw yesterday that this is not the right way to solve the problem; I will come back to it later.
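
For context, one generic way to keep two identical cards apart is to bind a single one by PCI bus address instead of by vendor/device ID, something like the rough sketch below (the address is just a placeholder, mine comes from lspci; I have not settled on this yet):

#!/bin/sh
# Hypothetical helper: hand one card (by bus address) to vfio-pci,
# leaving the identical second card on its normal driver.
DEV=0000:05:00.0    # example address only

modprobe vfio-pci
echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
# Detach the card from whatever driver currently owns it, if any.
[ -e "/sys/bus/pci/devices/$DEV/driver" ] && \
    echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind"
# Re-probe so vfio-pci claims it.
echo "$DEV" > /sys/bus/pci/drivers_probe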

Now I'm going to spend time on it: installing TrueNAS, importing the pools, generating caches, etc.
I also want to reactivate zfs-import-cache.service (I still need to find the appropriate command; my guess is in the sketch just below).
The HBA blacklist approach actually seems to be a good method, although I don't quite understand why I can't achieve the same thing with the zfs-import-by-cache and by-device-scanning method.
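
My untested guess for reactivating it, essentially just reversing the steps that disabled the services:

systemctl unmask zfs-import.service
systemctl enable zfs-import-cache.service
update-initramfs -u -k all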

Thank you very much for the help, let's see if I can now get TrueNAS working today.
 
Hey, just FYI, I was having all kinds of problems with my server build trying to get TrueNAS Scale working as a VM in Proxmox and getting the HBA to pass through. Not sure if this will help you, but one of the major problems was that when I physically built the server I had 6 HDDs intended for my raidz2 array and a SATA SSD intended for my Proxmox boot/VM drive. I had two mini SAS HD to SATA connector cables (the kind that split from one mini SAS into 4 SATA). I used one to plug the first 4 HDDs into the HBA, then used the second mini SAS cable to attach the last 2 HDDs to the HBA. Then, without really thinking about it, I used one of the unused SATA strands off the mini SAS connector to plug the boot SSD into the same HBA. Mind you, this was about a month before I got to the software side of things, and being totally new to this I wasn't even thinking about PCIe passthrough at the time. Imagine what happens when you install Proxmox on that SSD and then pass the HBA through: the boot drive fails to show up, because I had inadvertently tied it to the same HBA that I was then passing through to TrueNAS.

The solution was to unplug the SSD and install Proxmox on one of the onboard 1 TB M.2 NVMe drives. I put the TrueNAS VM on that NVMe (allocating 100 GB). Also, under Hardware in the Proxmox TrueNAS VM settings, I used SATA as the drive type. For the first time everything installed and loaded, and I now have my 48 TB TrueNAS vdev using the 6 HDDs via the HBA, with Proxmox and the VMs living on the onboard NVMe.
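For anyone curious, in the VM config (/etc/pve/qemu-server/<vmid>.conf) that setup ends up as lines roughly like these; the VMID, storage name and PCI address are placeholders rather than my real values, and pcie=1 assumes the q35 machine type:

sata0: local-zfs:vm-100-disk-0,size=100G
hostpci0: 0000:01:00.0,pcie=1

The sata0 line is the 100 GB TrueNAS system disk on the NVMe-backed storage, and hostpci0 is the HBA passed through to the VM.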

I just ordered a third mini SAS to SATA cable to attach the original SATA SSD directly to my motherboard's SAS connector (not the HBA). I plan to move my Proxmox install to the SATA SSD and use one of the NVMe drives for VMs and the second NVMe for my TrueNAS cache.