Zpool import not working

The import should only happen on the host that "owns" the drive. Since you gave the drives to the VM only it should manage them.
Checking SMART info and such on the node is fine, of course, but if you try to import/mount it on both sides you will very likely get into trouble.
My suggestion is to make sure that Goose and StoragePool are not listed on the node's zpool list and then try to import Goose inside the VM via the mentioned commands.
For the future the best practice when dealing with ZFS is to pass the whole HBA to the VM via PCI(e) passthrough. Or as mentioned earlier to let the node manage the ZFS pool and use ext4 or something inside the VM on a virtual disk.
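The second option above can be sketched roughly as follows. This is a hedged example, not the poster's exact setup: the pool name `tank`, the storage ID `tank-storage`, the VM ID `100`, and the disk paths are all placeholders to adjust.

```shell
# Sketch: let the PVE node own the ZFS pool and give the VM a plain
# virtual disk on top of it (all names here are placeholders).

# 1. Create the pool on the node, addressing disks by stable IDs:
zpool create tank raidz \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# 2. Register the pool as a PVE storage backend:
pvesm add zfspool tank-storage --pool tank

# 3. Attach a 32 GiB virtual disk from that storage to VM 100;
#    inside the guest it shows up as a normal block device that
#    can be formatted with ext4 or similar:
qm set 100 --scsi1 tank-storage:32
```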
 
I tried zpool import Goose like before and get: "Cannot import 'Goose': one or more devices is currently unavailable."

This was one reason I was trying to figure out if something died physically. I did the above in the VM itself. I think PCIe passthrough is what I should've done. We can fix that after. I was just trying to get this going to back up and then try and fix the whole thing again. One thing at a time, as my wife tells me, haha. I did not try the "by id" command because, I'll be honest, I'm not really sure how I would do that with 4 disks that have 4 different IDs.
 
It's just that single command. ZFS should find the disks belonging to that pool in that path and assemble it.
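To illustrate that single command (a sketch, assuming the pool name `Goose` from this thread): you don't have to type out all four disk IDs, you only point `zpool import` at the by-id directory and it scans it for pool members.

```shell
# -d takes a directory to search, not individual device IDs.
# ZFS scans /dev/disk/by-id for all disks labeled as members of
# the pool "Goose" and assembles it from whatever it finds there.
zpool import -d /dev/disk/by-id Goose
```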
Can you also check journalctl -r immediately after running the import? maybe there's something interesting in there.
If that doesn't work, or there's nothing useful logged, I have no other suggestion at the moment that I feel comfortable giving. I'm guessing the pool was imported from both the node and the VM at some point and that broke something, but I'm not sure.
 
That's all I have I guess. :/ That's a bummer...
 

Attachments

  • ProxMoxPool6.PNG
  • ProxMoxPool7.PNG
:(
The text is cut off, but that does not look good. Can you use an SSH client to grab the logs, or scroll to the right?
 
So I was able to SSH in but... it's still cut off and I can't scroll. I will have to resume whatever you would like tomorrow. I know you're really trying and I really appreciate the help. Thank you. Seriously...
 

Attachments

  • ProxMoxPool9.PNG
@Neobin I am not trying to be rude, but would you like to contribute?

Sure:
  • Try to import the pool inside the VM with: zpool import -o readonly=on Goose [1a] [1b] to possibly get at least read access. -f might additionally be needed.
    (In the documentation are more options one could try (as a last resort)...)
  • If this does not work, completely shut down that VM and try the same on the PVE host
  • If this also does not work, destroy and recreate the pool and restore from backup; but this time either use it directly on the PVE-host [2a] [2b] or with PCIe-passthrough [3], instead of disk-passthrough [4]...

[1a] https://openzfs.github.io/openzfs-docs/man/v2.2/8/zpool-import.8.html
[1b] https://openzfs.github.io/openzfs-docs/man/v2.2/7/zpoolprops.7.html
[2a] https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#chapter_zfs
[2b] https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_zfspool
[3] https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_pci_passthrough
[4] https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
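The first two steps above can be sketched as a short escalation, mildest option first (pool name `Goose` per this thread; run inside the VM first, then on the host only after the VM is fully shut down):

```shell
# Read-only import: the pool is opened without writing to it, which
# avoids making a possibly-damaged pool worse.
zpool import -o readonly=on Goose

# If ZFS complains the pool "may be in use" or was last accessed by
# another system, -f forces the import anyway. Only use this once you
# are sure the other side (node or VM) is not touching the disks.
zpool import -f -o readonly=on Goose
```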
 
@Neobin @Impact Good morning.

I tried the above and the output says IO error.

Secondly, I don't have a backup... I did at one point have a full backup of at least the data but, unfortunately, that drive died.
 

Attachments

  • ProxMoxPool10.PNG
  • ProxMoxPool11.PNG
The log above is cut off; for a quick look at what's going on, check the ZFS internal debug log: cat /proc/spl/kstat/zfs/dbgmsg

Maybe try zpool import -FXn first to see if recovery is even possible. It may take a while... possibly hours... but it will tell you whether a rewind would work before actually attempting it.
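Spelling that suggestion out as a sketch (pool name `Goose` assumed from the thread):

```shell
# -F  = recovery mode: try rewinding the pool to an earlier, consistent
#       transaction group, discarding the last few seconds of writes.
# -X  = extreme rewind: search much further back for a valid txg.
# -n  = dry run: only report whether the rewind would make the pool
#       importable, without modifying anything on disk.
zpool import -FXn Goose

# If the dry run reports the pool could be recovered, repeat for real:
# zpool import -FX Goose
```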
 
The Ubuntu server is the one that has the zpool? (The first log has a bunch of errors from the storage pool not matching the name, but I guess we don't care about those partitions.)

Can you run zpool status from the Ubuntu server?