How to import ZFS root pool by-id?

sirlaser

During the Proxmox 4.0 installation, I installed root on a mirrored vdev (rpool). The installer appears to have created the pool using device names (/dev/sdX) instead of by-id. Is there any way to force a re-import of the root zpool by-id? I know for a storage pool one can simply:

Code:
# zpool export rpool
# zpool import -d /dev/disk/by-id rpool

Is this possible for a root zpool? Barring that, is it possible to change the ZFS init.d script (or a similar startup script) to import the zpool by-id instead of by device name?


In other words, zpool status returns this:

Code:
root@proxmox:/# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Nov  2 18:56:29 2015
config:


        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
            sdc2    ONLINE       0     0     0

How can I get it to return something like this:

Code:
root@proxmox:/# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Nov  2 18:56:29 2015
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            ata-ST4000DM000-1F2168_XXXXXXXX  ONLINE       0     0     0
            ata-ST4000DM000-1F2168_XXXXXXXX  ONLINE       0     0     0
 
Newer ZFS versions use libblkid, which always returns /dev/sdX - but that should work without problems.
You can also try setting ZPOOL_IMPORT_PATH in /etc/default/zfs.
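A quick way to see what that file currently contains (the path below is the stock Debian/PVE location):

Code:
grep -n ZPOOL_IMPORT_PATH /etc/default/zfs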
 
Just to make things clear in case other people have this problem too: I edited /etc/default/zfs and changed the ZPOOL_IMPORT_PATH line from
Code:
 #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
to
Code:
 ZPOOL_IMPORT_PATH="/dev/disk/by-id"
Then I had to update the system's initramfs (by running update-initramfs -u). I also ran update-grub, but I don't think that mattered.
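For anyone who wants the same change as a copy/paste sketch (assuming the stock /etc/default/zfs shipped with PVE, where the line only exists commented out):

Code:
# append the setting, or edit the file by hand and uncomment/adjust the example line
echo 'ZPOOL_IMPORT_PATH="/dev/disk/by-id"' >> /etc/default/zfs
# rebuild the initramfs so the early-boot import uses the new search path
update-initramfs -u
# updating GRUB as well reportedly did not matter, but it does no harm
update-grub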
 
First of all, my apologies for resurrecting a 2-year-old thread. I have a host running PVE 5.1 which boots from a RAID10 pool of 4x SAS drives. I currently have 2 additional storage pools (created by-id) attached to this host. After making the edit mentioned by dietmar and alchemycs and running update-initramfs -u and update-grub, I am still getting intermittent zpool import errors on boot. I believe this is because my rpool now uses the wwn-* ID for 3 of the 4 drives instead of the scsi-* ID.

Output from zpool status:
Code:
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME                              STATE     READ WRITE CKSUM
    rpool                             ONLINE       0     0     0
      mirror-0                        ONLINE       0     0     0
        wwn-0x5000c500092f2373-part2  ONLINE       0     0     0
        wwn-0x5000c5000bbc72bb-part2  ONLINE       0     0     0
      mirror-1                        ONLINE       0     0     0
        scsi-35000c5000a51112f        ONLINE       0     0     0
        wwn-0x5000c5000c67603b        ONLINE       0     0     0

errors: No known data errors

Additional notes: this is not a production server yet. I am still in the testing phase of my build. All VMs, Containers, and data on the host are backed up. I can reinstall PVE if necessary. I have been working around this issue by leaving all other disks powered down until after PVE successfully boots.

Any assistance is greatly appreciated. Thanks!
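For reference, the wwn-* and scsi-3* links normally point at the same device and embed the same NAA identifier, so to see every alias udev created for one of the mirror members (using the WWN from the status output above as an example), something like this can be used:

Code:
# lists both the wwn-* and scsi-* symlinks for that disk, if both exist
ls -la /dev/disk/by-id/ | grep -i 5000c500092f2373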
 
First, you can try the suggestion by "kobuki" from this thread: https://forum.proxmox.com/threads/zfs-raid-disks.34099/#post-167015

Also, even though I have been told this is not a good way of doing it, I know it works, even if it is a little slow.

run " ls -la /dev/disk/by-id "
get the id of proper partitions for both disks , usually it is partition 2 on normal PVE zfs install.
use the id starting with scsi- ??????-part2

So if you have sda and sdb, you will get something like:

Code:
scsi-3600224801f5d0e04f975551b0804f975 -> ../../sda
scsi-3600224801f5d0e04f975551b0804f975-part1 -> ../../sda1
scsi-3600224801f5d0e04f975551b0804f975-part2 -> ../../sda2
scsi-3600224801f5d0e04f975551b0804f975-part9 -> ../../sda9

scsi-3600224801f5d0e04f975551b1f5d0e04 -> ../../sdb
scsi-3600224801f5d0e04f975551b1f5d0e04-part1 -> ../../sdb1
scsi-3600224801f5d0e04f975551b1f5d0e04-part2 -> ../../sdb2
scsi-3600224801f5d0e04f975551b1f5d0e04-part9 -> ../../sdb9

Grab the IDs for partition 2, as it is usually the main data partition in the pool:

scsi-3600224801f5d0e04f975551b0804f975-part2
scsi-3600224801f5d0e04f975551b1f5d0e04-part2

Then do the following (note that zpool attach needs the name of a device that is still in the mirror, followed by the new device):

zpool detach rpool sda2
zpool attach rpool sdb2 /dev/disk/by-id/scsi-<your sda partition ID>-part2

Wait for resilvering to finish, then do the same for the second disk.

With a mirrored pool (RAID1) this lets you detach/attach devices on a live pool with no issue.
Just make sure the server stays up during the process (a UPS is a must).

Since I do not like messing with the initrd, and since I can do all of this via SSH, it works out better for me, even though it takes a little longer. On a new setup with small OS drives the wait is not that long.

Also, in contrast to the other method, I have full control over exactly what ends up in the config: rather than depending on what the import will do, I am giving ZFS the exact ID I want to use for each vdev.
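As a concrete illustration of the sequence above, here is a sketch using the example IDs from the listing (a standard two-disk PVE mirror on sda/sdb is assumed):

Code:
# first disk: swap the sdX label for its by-id alias, attaching against the
# member that stays in the mirror (sdb2 here)
zpool detach rpool sda2
zpool attach rpool sdb2 /dev/disk/by-id/scsi-3600224801f5d0e04f975551b0804f975-part2
zpool status rpool    # wait until the resilver has completed
# second disk: same swap, now attaching against the already-renamed member
zpool detach rpool sdb2
zpool attach rpool scsi-3600224801f5d0e04f975551b0804f975-part2 /dev/disk/by-id/scsi-3600224801f5d0e04f975551b1f5d0e04-part2
zpool status rpool    # again, wait for the resilver before rebooting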
 

Great info Jim. Thank you!

*update* The "kobuki" method yielded the exact same results as the "alchemycs" method: 3 of the 4 disks are still identified by their wwn-* ID, and the zpool status output is the same as posted above. Between the two methods, I would recommend the "alchemycs" one, as it was much simpler and easy to accomplish from the command line. However, I have not been able to reproduce the zpool import error at boot after multiple host reboots in the last 24 hours.
 
I added the ZPOOL_IMPORT_PATH="/dev/disk/by-id" line to /etc/default/zfs and ran update-initramfs -u (I didn't update GRUB), and the rpool now uses the "friendly" disk names, so it sort of worked. I don't know why one disk got a wwn name while the other two show the brand names. My other pools also now have wwn names. At least now when I add disks, the pools are happy:

Code:
        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            wwn-0x5f8db4c085123293-part3                     ONLINE       0     0     0
            ata-TEAM_L3_SSD_120GB_3E6D07681F9132148518-part3 ONLINE       0     0     0
            ata-TEAM_L3_SSD_120GB_17AD07681A8432100294-part3 ONLINE       0     0     0
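If the mixed naming bothers anyone, the detach/attach approach described earlier in the thread should also work here to rename the wwn-* member, assuming that disk has another by-id alias at all (the alias below is a placeholder; "ls -la /dev/disk/by-id" will show what actually exists):

Code:
# drop the wwn-named member from the three-way mirror, then re-attach it by its
# preferred alias; the second argument to attach is a device that stays in the mirror
zpool detach rpool wwn-0x5f8db4c085123293-part3
zpool attach rpool ata-TEAM_L3_SSD_120GB_3E6D07681F9132148518-part3 /dev/disk/by-id/<preferred-alias-for-that-disk>-part3
zpool status rpool    # wait for the resilver to finish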
 