ZFS Online but unable to import

Potatoes1921

New Member
Feb 23, 2023
I know this is a pretty common question, but I haven't found a situation quite like mine. To cut it short, I had a ZFS pool on my home server running bare-metal Proxmox and wanted to move it to an OpenMediaVault (OMV) installation. That went fine, but then I removed the ZFS storage from Proxmox, not realizing I had put the OS drive of the OMV installation on that ZFS pool... So I exported the pool, created a whole new OMV installation, and am now unable to reimport it. The pool seems fine and healthy, all disks are showing online, it is just refusing to import. Since it's my home setup and I haven't got the money for a proper secondary backup, this was all I had of my Plex collection. Any advice would be greatly appreciated.
My current thought process is to import it as a read-only pool (if that works), copy the data to an external drive I have, and recreate the pool. I'd prefer not to have to do that, but I understand I screwed myself here. I attempted to import it as read-only and this is what came up (see attachments).
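In case the exact command matters, this is roughly what I ran for the read-only attempt (pool name is bulk_storage; I may not have the options exactly right):

zpool import -o readonly=on -f bulk_storage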
 

Attachments

  • 4RpXN.png (24 KB)
  • krnjG.png (67.2 KB)
  • zceSE.png (77.3 KB)
The problem with "last accessed by another system" is not uncommon. You can use the -f parameter to force it (like it says in the message), if you're sure you want to import it (and no longer use it in that other system). Or do I misunderstand your issue?
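In other words, something along these lines (adjust the pool name to whatever a plain zpool import lists for you):

zpool import
zpool import -f bulk_storage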
 
Hi, sorry, I thought I'd included that. I tried that, and it ended up saying one or more devices is currently unavailable... but it clearly shows all 3 disks are online.

 
The pool is configured to use whole drives, but the drives have partitions, and it looks like the first partition on each is what the pool should be using. I don't know how you managed to create this issue or how to fix it.
 
Hm, I don't know, I created it the same way I did every other ZFS pool. Maybe the disks got messed up from being imported and moved around between hosts? Here's how Proxmox sees my disks (screenshot attached).
 
Yes, and as I said already, note that /dev/sd* is not stable. Your pool wants to use /dev/sda, sdd and sde, but it should be using /dev/sda1, sdb1 and sde1. As I said, I don't know what causes this or how to fix it. This is not Proxmox specific, so you might want to search the internet and/or ask people who are knowledgeable about ZFS.
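One thing that might be worth a try (no guarantee it changes anything in your case) is telling zpool import exactly where to look for devices, e.g.:

zpool import -d /dev/disk/by-id
zpool import -d /dev/disk/by-id -f bulk_storage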
 
Hm, well, I created the pool by referencing /dev/disk/by-id... so that's strange. Thanks for the pointers; just gonna go cry a bit and then take a look at this later, I guess lmao
 
Ask the ZFS on Linux people or other people on the internet. I'm sure some people might be able to help; I'm just not knowledgeable enough.
 
I think I found a similar issue here: https://github.com/openzfs/zfs/issues/2966. Try labeling the partitions (and reboot or run partprobe) and importing it with -d /dev/disk/by-partlabel.
And a similar issue with ZFS on Windows: https://github.com/openzfsonwindows/openzfs/issues/91
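A rough sketch of what I mean, assuming GPT disks and that the data sits on partition 1 of each drive (the label names here are just examples; adjust sda/sdb/sde to your actual devices):

parted -s /dev/sda name 1 zfs-bulk-a
parted -s /dev/sdb name 1 zfs-bulk-b
parted -s /dev/sde name 1 zfs-bulk-e
partprobe
zpool import -d /dev/disk/by-partlabel -f bulk_storage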
Unfortunately that didn't work; I still get "cannot import 'bulk_storage': one or more devices is currently unavailable" after renaming the partitions. I even did both of them just to be sure. I created the pool by ID using /dev/disk/by-id/, so I wouldn't have thought that would work, but I'm not sure. I'm also not sure what storage it's trying to use or grab that's unavailable; the metadata partition seems fine to me, so I don't know why that would be an issue either.
I did post on other ZFS and server forums, but most got no response, and the couple of replies I did get didn't lead to anything. I don't want to spam everywhere, but I also don't want to have to reacquire my collection.

"
zpool import -fd /dev/disk/by-partlabel bulk_storage
cannot import 'bulk_storage': one or more devices is currently unavailableScreenshot_2.png
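If anyone wants to sanity-check me, this is roughly how I've been checking whether the new partition labels actually show up (assuming lsblk's PARTLABEL column is the right thing to look at):

lsblk -o NAME,SIZE,PARTLABEL,FSTYPE
ls -l /dev/disk/by-partlabel/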
 
zpool import -f bulk_storage ?
The second image in my first post already shows that. It keeps reporting a device is unavailable, but I've yet to find out which device, as there are only supposed to be the 3 drives, and they're all showing online.
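The next thing I'm planning to try is dumping the ZFS labels from each partition, to see which device paths and GUIDs the pool actually expects (assuming sda1/sdb1/sde1 are the right partitions on my system):

zdb -l /dev/sda1
zdb -l /dev/sdb1
zdb -l /dev/sde1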
 
