ZFS pool won’t import after switching from /dev/sdx to /dev/disk/by-id – mixed vdev paths

stingray2362

New Member
Jun 10, 2025
Hello everyone,

I have a ZFS pool made up of 10 disks, arranged as two 5-disk RAIDZ1 vdevs. When I first created the pool, I only had 5 drives, and they formed the first vdev. At the time, I used /dev/sdx paths to set them up. Later on, I added another 5 drives to the pool, and this time I used /dev/disk/by-id paths. So the pool ended up being a mix of both. I had no issues with this and was able to use the whole pool's storage. All the disks are from the same manufacturer and are the same size (20 TB).
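
For reference, before the export the vdev listing in zpool status looked roughly like this (reconstructed from memory, device names illustrative):

Code:
        my-pool       ONLINE
          raidz1-0    ONLINE
            sdb       ONLINE
            sdc       ONLINE
            ...
          raidz1-1    ONLINE
            ata-TOSHIBA_MG10ACA20TE_...  ONLINE
            ...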

I recently tried to fix this inconsistency by following this guide:
https://serverfault.com/questions/8...-in-a-zfs-pool-from-dev-sdx-to-dev-disk-by-id

I ran
Code:
zpool export pool-name
to export the pool. Initially, I got a "pool is busy" error, but after stopping all my containers in Proxmox, the export succeeded.
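
(Side note for anyone who hits the same "pool is busy" message: you can first check what is holding the pool's filesystems open. Assuming the pool is mounted at /pool-name, something like:)

Code:
# show processes with open files on the filesystem mounted at /pool-name
fuser -vm /pool-name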

Next, I tried to import the pool using:
Code:
zpool import pool-name -d /dev/disk/by-id
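
(In hindsight, the zpool-import man page puts the options before the pool name, so the canonical form would have been:)

Code:
zpool import -d /dev/disk/by-id pool-name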

But now the pool can't be found. The drives are all visible in /dev/disk/by-id, and by checking their SMART data I can tell the original 5 drives (the first vdev) apart from the 5 newer ones (the second vdev). My question is: is there a way to import the pool by explicitly specifying the disk IDs for both vdevs?
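
According to the zpool-import man page, -d can be given multiple times and can point at an individual device instead of a directory, so I assume (untested) I could spell out every disk explicitly, something like:

Code:
zpool import \
  -d /dev/disk/by-id/<id-of-disk-1> \
  -d /dev/disk/by-id/<id-of-disk-2> \
  ...
  pool-name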

From my understanding, ZFS doesn't store the pool configuration based on how the drives were referenced; it uses the metadata stored in labels on the disks themselves. As one Reddit user put it: "For the record, exporting a pool is akin to Windows ejecting a USB drive. You can't destroy a pool with an export command."
Have I managed to destroy the pool? :D
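
(That on-disk metadata can be inspected directly with zdb. Pointing it at the data partition of one of the member disks should dump the label, including the pool GUID and the vdev tree; the path below is just an illustration:)

Code:
# ZFS keeps its labels on the data partition (-part1) when it manages the whole disk
zdb -l /dev/disk/by-id/ata-TOSHIBA_MG10ACA20TE_44F0A0GMXXXX-part1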
 
Hi,

have you tried running 'zpool import -d /dev/disk/by-id' to see what ZFS sees?
Here is the output. I redacted some identifiers:

Code:
zpool import -d /dev/disk/by-id
   pool: my-pool
     id: 9362173134538127XXX
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        my-pool                                   ONLINE
          raidz1-0                                ONLINE
            ata-TOSHIBA_MG10ACA20TE_44F0A0GMXXXX  ONLINE
            ata-TOSHIBA_MG10ACA20TE_44F0A0A5XXXX  ONLINE
            ata-TOSHIBA_MG10ACA20TE_44F0A0CNXXXX  ONLINE
            ata-TOSHIBA_MG10ACA20TE_44F0A0CTXXXX  ONLINE
            ata-TOSHIBA_MG10ACA20TE_44F0A0GFXXXX  ONLINE
          raidz1-1                                ONLINE
            wwn-0x5000039d78e02XXX                ONLINE
            wwn-0x5000039d88c93XXX                ONLINE
            wwn-0x5000039d88c96XXX                ONLINE
            wwn-0x5000039d88c96XXX                ONLINE
            wwn-0x5000039d88c96XXX                ONLINE

   pool: my-pool
     id: 4156001255627227XXX
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        my-pool                           UNAVAIL  insufficient replicas
          raidz1-0                        UNAVAIL  insufficient replicas
            wwn-0x5000039d38ca53XX-part9  UNAVAIL  corrupted data
            wwn-0x5000039d38ca49XX-part9  UNAVAIL  corrupted data
            wwn-0x5000039d38ca4bXX-part9  UNAVAIL  corrupted data
            wwn-0x5000039d38ca4bXX-part9  UNAVAIL  corrupted data
            wwn-0x5000039d38ca53XX-part9  UNAVAIL  corrupted data

Edit:
I managed to import the pool with the following command:

Code:
zpool import -d /dev/disk/by-id 9362173134538127XXX

The pool is usable again. However, I'm still a bit confused: why does zpool import show two pools with the same name, one of them corrupt?
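
One thing I noticed is that the UNAVAIL entries all point at -part9, the small reserved partition ZFS creates when it takes a whole disk, so my guess is that those are stale labels left over from some earlier pool rather than my actual data. I plan to inspect one of them with zdb (untested):

Code:
# dump whatever ZFS label survives on the reserved partition
zdb -l /dev/disk/by-id/wwn-0x5000039d38ca53XX-part9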

In the future, if I want to add more storage, can I simply add another vdev (in my case, a RAIDZ1 of 5 identical drives), this time using the drives' IDs?
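
My assumption is that it would be a plain zpool add, run with -n first as a dry run to sanity-check the layout before committing (the IDs below are placeholders):

Code:
# dry run: print the resulting pool layout without changing anything
zpool add -n my-pool raidz1 \
  /dev/disk/by-id/<id-1> /dev/disk/by-id/<id-2> /dev/disk/by-id/<id-3> \
  /dev/disk/by-id/<id-4> /dev/disk/by-id/<id-5>
# if the output looks right, run it again without -n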
 