Change zpool devices from sata sdX to by-id/ata-* (not by-id/wwn*)

So I did some replacement and upgrades of drives on a PVE node using ZFS and some hot-swap enclosures. Everything went fine until the next reboot: suddenly the ZFS pool was degraded. The problem was some drives changing names in the /dev/sdX form. That's because I left a "hole": I had sda, sdb and sdd, but on reboot they became sda, sdb and sdc, and ZFS wouldn't accept the change.

So after some digging I found out that the recommended way to name devices is using /dev/disk/by-id, so I uncommented the ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id" line in /etc/default/zfs and did an export/import, roughly like this:
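Code:
# /etc/default/zfs -- uncommented so imports prefer by-vdev/by-id device names
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

zpool export pool1
zpool import pool1

Indeed, now the zpool looks like this.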

Code:
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 0 days 11:46:23 with 0 errors on Wed Jan 22 00:32:01 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        pool1                       ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x500003992bb007f4  ONLINE       0     0     0
            wwn-0x50000399ac602417  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x5000c500a4cd8267  ONLINE       0     0     0
            wwn-0x500003993b7003b0  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            wwn-0x500003992bd8088d  ONLINE       0     0     0
            wwn-0x500003992ba00c20  ONLINE       0     0     0

Which I guess is more robust, but those identifiers are pretty hard to correlate to physical drives, so I would prefer to use the ata-* variants like ata-ST6000VN0041-ABC123_ZXYA1B2. But somehow these WWN identifiers take priority.

What's your recommendation? Is it possible to force the usage of the make-model-serial string?
 
Hi,
you could try doing an export and then importing with
Code:
zpool import -d /dev/disk/by-id/ata-<ID1> -d /dev/disk/by-id/ata-<ID2> <rest of the -d options> pool1
where you need to specify one -d /path/to/device per device.
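Alternatively, -d can also point at a directory to scan rather than at individual devices, which is less typing (a sketch, using your pool name):
Code:
zpool export pool1
zpool import -d /dev/disk/by-id pool1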
 
I've already tried that (without the /dev/disk/by-id prefix, just ata-...).

I've just now tried again with the prefix. It's not working. By the way, is there some mechanism that auto-imports pools? If I export it, it kind of imports itself after a few seconds.

Look at this.

Code:
root@pve-shark2:~# zpool export pool1
root@pve-shark2:~# zpool status
no pools available
root@pve-shark2:~# zpool status
no pools available
root@pve-shark2:~# zpool status
no pools available
root@pve-shark2:~# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 0 days 11:46:23 with 0 errors on Wed Jan 22 00:32:01 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        pool1                       ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x500003992bb007f4  ONLINE       0     0     0
            wwn-0x50000399ac602417  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x5000c500a4cd8267  ONLINE       0     0     0
            wwn-0x500003993b7003b0  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            wwn-0x500003992bd8088d  ONLINE       0     0     0
            wwn-0x500003992ba00c20  ONLINE       0     0     0

errors: No known data errors
 
If your pool is configured as a storage in PVE, then PVE will automatically import the pool again. So you'll have to disable the storage with that pool first and re-enable it afterwards.
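A minimal sketch of doing that from the CLI (assuming the storage ID is also pool1; adjust to your actual storage name):
Code:
pvesm set pool1 --disable 1   # keep PVE from auto-importing/activating the pool
# ...export and re-import the pool here...
pvesm set pool1 --disable 0   # re-enable the storage afterwards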
I guess you have to specify the links to the partitions when you specify the full path.

Example (testpool is on sdf and only has this single disk):
Code:
root@rob0 ~ # zpool status testpool
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        testpool                                ONLINE       0     0     0
          scsi-0QEMU_QEMU_HARDDISK_drive-scsi6  ONLINE       0     0     0

errors: No known data errors
root@rob0 ~ # zpool export testpool
root@rob0 ~ # zpool status testpool
cannot open 'testpool': no such pool

You can skip the following step (since I'm using a virtual disk I have to create the link):
Code:
root@rob0 ~ # ln -s /dev/sdf1 /dev/disk/by-id/ata-myid-1

And now import specifying the full path:
Code:
root@rob0 ~ # zpool import -d /dev/disk/by-id/ata-myid-1 testpool
root@rob0 ~ # zpool status testpool
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        testpool      ONLINE       0     0     0
          ata-myid-1  ONLINE       0     0     0

errors: No known data errors

Of course you have to specify all your device paths. And you might want to test if the new path is persistent across boots.
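One way to check, as a sketch (using your pool name):
Code:
# verify the by-id links point at the expected kernel devices
ls -l /dev/disk/by-id/ata-*
# after a reboot, confirm the pool still shows the by-id names
zpool status pool1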
 
Yes, the storage is configured on PVE. I will disable it in order to change it.

About using the partition links instead of the whole disk... which partition?

When I created the pool I used whole disks. Actually I think I used the GUI, but anyway it showed the full disks (aka sda, sdb...). Now each disk shows two partitions, and there are three links per disk under /dev/disk/by-id/ata-*: one without a suffix for the whole disk, plus -part1 and -part9. Which one should I use? Honestly, it doesn't feel right. How should I add a new disk in the future if not by putting in the whole disk?

Anyway, I wanted to test exporting/importing with the storage disabled in PVE.

Now I cannot import it, and I don't know why.

Code:
root@pve-shark2:~# zpool status
no pools available
root@pve-shark2:~# zpool import
   pool: pool1
     id: 752820709138909381
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        pool1                       ONLINE
          mirror-0                  ONLINE
            wwn-0x500003992bb007f4  ONLINE
            wwn-0x50000399ac602417  ONLINE
          mirror-1                  ONLINE
            wwn-0x5000c500a4cd8267  ONLINE
            wwn-0x500003993b7003b0  ONLINE
          mirror-2                  ONLINE
            wwn-0x500003992bd8088d  ONLINE
            wwn-0x500003992ba00c20  ONLINE
root@pve-shark2:~# zpool import -d /dev/disk/by-id/ata-ST6000VN0041-2EL11C_ZA192Q78 -d /dev/disk/by-id/ata-TOSHIBA_HDWN160_4921K0E4FAXG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_3997K0JDFAVG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_3999K0FBFAVG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_399EK0FCFAVG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_Y9OVK15CFAVG
no pools available to import
root@pve-shark2:~# zpool import -d /dev/disk/by-id/ata-ST6000VN0041-2EL11C_ZA192Q78 -d /dev/disk/by-id/ata-TOSHIBA_HDWN160_4921K0E4FAXG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_3997K0JDFAVG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_3999K0FBFAVG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_399EK0FCFAVG -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_Y9OVK15CFAVG pool1
cannot import 'pool1': no such pool available
root@pve-shark2:~# zpool import -d ata-ST6000VN0041-2EL11C_ZA192Q78 -d ata-TOSHIBA_HDWN160_4921K0E4FAXG -d ata-TOSHIBA_HDWN180_3997K0JDFAVG -d ata-TOSHIBA_HDWN180_3999K0FBFAVG -d ata-TOSHIBA_HDWN180_399EK0FCFAVG -d ata-TOSHIBA_HDWN180_Y9OVK15CFAVG
no pools available to import
root@pve-shark2:~# zpool import -d ata-ST6000VN0041-2EL11C_ZA192Q78 -d ata-TOSHIBA_HDWN160_4921K0E4FAXG -d ata-TOSHIBA_HDWN180_3997K0JDFAVG -d ata-TOSHIBA_HDWN180_3999K0FBFAVG -d ata-TOSHIBA_HDWN180_399EK0FCFAVG -d ata-TOSHIBA_HDWN180_Y9OVK15CFAVG pool1
cannot import 'pool1': no such pool available
 
Now each disk shows two partitions, and there are three links per disk under /dev/disk/by-id/ata-*: one without a suffix for the whole disk, plus -part1 and -part9. Which one should I use? How should I add a new disk in the future if not by putting in the whole disk?


Even if you add an HDD to a ZFS pool as a whole "disk", behind the scenes ZFS will only use partitions. Adding a disk creates 2-3 partitions like this:

- one of about 8 MB in size (because even for the same nominal HDD size, not all HDDs have exactly the same capacity; there can be a very small difference)
- one for UEFI (if you boot via UEFI and not MBR)
- one that takes up most of the disk

So you need to add the biggest partition to the ZFS pool ;)
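A quick way to see that layout on one of the pool's disks (a sketch; /dev/sda is just an example device):
Code:
# show the partitions ZFS created on a pool member
lsblk -o NAME,SIZE,TYPE /dev/sda
# the -part1 link is the big data partition, -part9 the small ~8 MB one
ls -l /dev/disk/by-id/ata-* | grep -- -part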


Good luck / Bafta.
 
Now I cannot import it, I don't know why.

The problem is that if you use -d and point to the whole disk, ZFS will not recognize the pool, since it resides on one of the partitions (that ZFS itself created).
You should always be able to import with one of the following:
Code:
zpool import pool1
zpool import -d /dev/disk/by-id pool1

But to try and get the ata- prefixes, use your favorite tool to determine which partition really contains the filesystem and try again with the -partN suffixes.
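A sketch with the serials from above, assuming the data sits on -part1 of each disk (verify with lsblk or fdisk first):
Code:
zpool import -d /dev/disk/by-id/ata-ST6000VN0041-2EL11C_ZA192Q78-part1 \
             -d /dev/disk/by-id/ata-TOSHIBA_HDWN160_4921K0E4FAXG-part1 \
             -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_3997K0JDFAVG-part1 \
             -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_3999K0FBFAVG-part1 \
             -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_399EK0FCFAVG-part1 \
             -d /dev/disk/by-id/ata-TOSHIBA_HDWN180_Y9OVK15CFAVG-part1 \
             pool1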
 
