Zpool import not working

MEpke
Member · Sep 5, 2024
Hey all! Proxmox VE, working in the shell.

The import shows my pool, and I've tried different commands to get it imported. Why would it show ONLINE but not be accessible at the same time? I ran smartctl on all the drives and everything passed. It was working last night; not sure what's going on. Screenshot posted. Ideas and help needed. Thanks!

[Screenshot: 1753485133783.png]
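The drive checks were roughly along these lines (exact flags may have differed):
Bash:
smartctl -a /dev/sdb   # repeated for sdc, sdd, sde; all reported PASSED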
 
It's one pool, so just once. Can you also share lsblk -o+FSTYPE,MODEL?


What do you mean by that?

It's the bottom drives: sdb, sdc, sdd, sde.

I use a Dell PERC H200 in IT mode to pass the drives through to the VMs. It's worked for the last... 4 years?
 

Attachments

  • IMG_3605.jpeg
Please don't unnecessarily quote. If you pass the disks to a VM, you probably should not try to manage them from the node side in any way; manage the pool from inside the VM. I need more information on how you passed them through.
 
Please don't unnecessarily quote. If you pass the disks to a VM, you probably should not try to manage them from the node side in any way; manage the pool from inside the VM. I need more information on how you passed them through.
I don't know what unnecessary quoting is. This is just how I communicate?
I was only looking node-side, as everything seems to be giving the same errors. The last VM to use these disks was my Ubuntu server VM. Everything else stays off.

What would you like me to do?

Thank you.
 
This is unnecessary, as my message is already above. Just don't use the Reply button :)
[Screenshot: 1753503825811.png]

I'd like to see qm config VMIDHERE --current for all the VMs where you pass disks through.
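E.g. (the VM ID here is a placeholder):
Bash:
qm list                   # find the VM IDs
qm config 100 --current   # repeat for every VM that gets a raw disk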
 
Understood. Didn't realize you could do that. I'm not on here much. I'm not doing this in the VM unless that's where I should be doing it?
 

Attachments

  • IMG_3607.jpeg
Please don't unnecessarily quote.
This is unnecessary, as my message is already above. Just don't use the Reply button :)
[Screenshot: 1753503825811.png]

To be honest, your constant urging regarding this topic distracts my reading far more than the full quote...
 
@Neobin I try to keep this short and only do that if full-quoted multiple times. My 4 words of urging vs. hundreds of them repeated. Ads are hated because they get in the way of the content you want to read: just noise you have to ignore. This is the same. I figure most people are not doing this intentionally. I made a suggestion about how to make this less likely to occur: https://forum.proxmox.com/threads/please-disable-the-forums-reply-button.167734/
I only shared the picture because they asked for elaboration, and this whole text because you complained about my urging. I do not want to make this a big deal; I'm only here to learn and help, so I'll get back on topic now.


@MEpke Yeah, in this case you should not try to do anything with the disks on the node, as it can cause corruption. Hopefully that didn't already happen.
Is there a reason against letting the node manage the pool and simply giving the VM a virtual disk from the storage?
If you passed the disks to a TrueNAS VM I'd understand, but this is an Ubuntu VM. Raw device mapping like this isn't recommended when using ZFS on the disks, by the way.
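For reference, that alternative would look roughly like this (storage ID, VM ID and disk size are made-up examples; don't try this while the pool's state is still unclear):
Bash:
# on the node: import the pool and register it as a Proxmox ZFS storage
zpool import Goose
pvesm add zfspool goose-store --pool Goose
# then hand the VM a 100G virtual disk allocated from that storage
qm set 100 --scsi1 goose-store:100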

What does this inside the VM say?
Bash:
zpool status -vP
zpool import
 
@Neobin my 4 words vs hundreds of them repeated.

4 words (or, more like in this actual case, 44*) that I have to read over and over again, vs. hundreds of them that I can easily skip because they are visually separated.

*:
Please don't unnecessarily quote.
I don't know what unnecessary quoting is. This is just how I communicate?
This is unnecessary, as my message is already above. Just don't use the Reply button :)
Understood. Didn't realize you could do that. I'm not on here much.
 
@Impact Ok... so, truth be told, I knew nothing when I did all this originally.

Originally this was just TrueNAS (it may have been FreeNAS at the time) on a desktop.

I created a 4-disk RAID-Z1 and that was that. Fast forward to the Proxmox server build: I learned it all on my own and didn't know any better, just what I read or was told.

This is just what someone or something said to do at the time. I'm open to suggestions; just trying to get my 8TB of data back for now. That's the top priority.

This is from inside the Ubuntu VM. The Goose pool is what we are concerned with. The storagepool was something I was experimenting with but decided against, as I didn't know what I was doing when I first set it all up. I believe it may have been a Proxmox pool, as you mentioned before.

sda, when looking at the disks, was my Proxmox boot SSD, just for the record.
 

Attachments

  • ProxMoxPool1.PNG
@Neobin I am not trying to be rude, but would you like to contribute?

At first, I thought @Impact was being rude; however, he took the time to explain what he meant. I may not have liked the way it was asked/stated, but he is correct. In the end, "you don't know what you don't know." I didn't know, and now I do. Thank you @Impact .
 
To be honest, I'm a bit confused and uncertain here. The importable pool info shows the second partition (sdc2) rather than the first (sdc1).
In the lsblk -o+FSTYPE,MODEL output from earlier, I can see there's a 2G partition of type ZFS on all those disks. It's almost as if there are two pools on each disk. Does zfs list look as you expect?
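One way to check which partition actually carries a ZFS label (sdc as an example; adjust to your devices):
Bash:
zfs list
lsblk -o NAME,SIZE,FSTYPE /dev/sdc
zdb -l /dev/sdc1   # prints the ZFS label, if one exists
zdb -l /dev/sdc2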
 
@Impact Here is what my "passthroughs" look like, if you were interested. I honestly don't know. It worked, now it doesn't. I would like to know how I can see if maybe the RAID card died? My SSD has a light on its "hard drive caddy" and the others do not; however, for the life of me I can't remember if they did before or not. :/ The scsi1-4 (I believe) are the passthroughs. :) scsi0 would be the boot.
 

Attachments

  • ProxMoxPool2.PNG
@Impact No, zfs list ONLY shows storagepool, whereas before it showed Goose AND storagepool. :/ If you need a picture, I can provide one.
 
qm config already provided that information :)
Can you try zpool import Goose or zpool import Goose -d /dev/disk/by-id/ inside the VM? I don't know off the top of my head how by-id names raw-mapped disks like this. Another by-* might be better, or just /dev/, which is probably the default.
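I.e. something like:
Bash:
zpool import Goose
zpool import -d /dev/disk/by-id/ Goose
zpool import -d /dev/ Goose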
To check if your HBA died, try lspci -k and see if it shows up. Also check ls -l /dev/disk/by-path/ to see which PCI(e) device each disk is connected to. Run these commands on the node. lsblk -o+FSTYPE,VENDOR,MODEL,SERIAL,TRAN might help to identify your drives. Is one missing?
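Put together (the grep filter is just a convenience):
Bash:
lspci -k | grep -iA3 sas                  # is the HBA there with a driver bound?
ls -l /dev/disk/by-path/                  # which PCI(e) device each disk sits behind
lsblk -o+FSTYPE,VENDOR,MODEL,SERIAL,TRAN  # are all four data disks present?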
I should have mentioned: make sure the node has these pools exported.
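I.e. on the node (only for pools the node itself isn't actively using):
Bash:
zpool export Goose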
 
lspci -k spat this out. I made it bigger to see. That should be the RAID controller.

Code:
00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 13)
Subsystem: Dell 5520 I/O Hub to ESI Port
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
Kernel driver in use: pcieport
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
Kernel driver in use: pcieport
00:04.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 4 (rev 13)
Kernel driver in use: pcieport
00:05.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 (rev 13)
Kernel driver in use: pcieport
00:06.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 6 (rev 13)
Kernel driver in use: pcieport
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
Kernel driver in use: pcieport
00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 13)
Kernel driver in use: pcieport
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 13)
Kernel driver in use: i7core_edac
Kernel modules: i7core_edac
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
Subsystem: Dell PowerEdge R710 USB UHCI Controller
Kernel driver in use: uhci_hcd
Kernel modules: uhci_hcd
00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
Subsystem: Dell PowerEdge R710 USB UHCI Controller
Kernel driver in use: uhci_hcd
Kernel modules: uhci_hcd
00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
Subsystem: Dell PowerEdge R710 USB EHCI Controller
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
Subsystem: Dell PowerEdge R710 USB UHCI Controller
Kernel driver in use: uhci_hcd
Kernel modules: uhci_hcd
00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
Subsystem: Dell PowerEdge R710 USB UHCI Controller
Kernel driver in use: uhci_hcd
Kernel modules: uhci_hcd
00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
Subsystem: Dell PowerEdge R710 USB EHCI Controller
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
Subsystem: Dell 82801IB (ICH9) LPC Interface Controller
Kernel driver in use: lpc_ich
Kernel modules: lpc_ich
00:1f.2 IDE interface: Intel Corporation 82801IB (ICH9) 2 port SATA Controller [IDE mode] (rev 02)
Subsystem: Dell PowerEdge R710 SATA IDE Controller
Kernel driver in use: ata_piix
Kernel modules: pata_acpi
01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
DeviceName: Embedded NIC 1
Subsystem: Dell PowerEdge R710 BCM5709 Gigabit Ethernet
Kernel driver in use: bnx2
Kernel modules: bnx2
01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
DeviceName: Embedded NIC 2
Subsystem: Dell PowerEdge R710 BCM5709 Gigabit Ethernet
Kernel driver in use: bnx2
Kernel modules: bnx2
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
DeviceName: Embedded NIC 3
Subsystem: Dell PowerEdge R710 BCM5709 Gigabit Ethernet
Kernel driver in use: bnx2
Kernel modules: bnx2
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
DeviceName: Embedded NIC 4
Subsystem: Dell PowerEdge R710 BCM5709 Gigabit Ethernet
Kernel driver in use: bnx2
Kernel modules: bnx2
03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
DeviceName: Integrated SAS
Subsystem: Dell PERC H200 Integrated
Kernel driver in use: mpt3sas
Kernel modules: mpt3sas

08:03.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a)
DeviceName: Embedded Video
Subsystem: Dell PowerEdge R710 MGA G200eW WPCM450
Kernel driver in use: mgag200
Kernel modules: matroxfb_base, mgag200
fe:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers
fe:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder
fe:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Link 0
fe:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Physical 0
fe:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Mirror Port Link 0
fe:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Mirror Port Link 1
fe:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Link 1
fe:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Physical 1
fe:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers
fe:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder
fe:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers
fe:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers
fe:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control
fe:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address
fe:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank
fe:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control
fe:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control
fe:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address
fe:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank
fe:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control
fe:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control
fe:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address
fe:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank
fe:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control
ff:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers
ff:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder
ff:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Link 0
ff:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Physical 0
ff:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Mirror Port Link 0
ff:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Mirror Port Link 1
ff:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Link 1
ff:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series QPI Physical 1
ff:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers
ff:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder
ff:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers
ff:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers
ff:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control
ff:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address
ff:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank
ff:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control
ff:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control
ff:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address
ff:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank
ff:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control
ff:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control
ff:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address
ff:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank
ff:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)
Subsystem: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control
 
I don't understand the first thing you want me to do, as I have always done: zpool import -f Goose

Never by individual disks?

Secondly, all drives seem to be there? I have one 1TB bootable and 4 x 4TB drives. The text document is how I've always "passed them through", and the command was run at the node level, NOT inside the VM. Pictures attached.
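It's basically a command of this shape (the VM ID and disk serial here are placeholders):
Bash:
qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-XXXXXXXX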
 

Attachments

  • ProxMoxPool4.PNG
  • ProxMoxPool5.PNG