Hi!
I'm currently playing around with ZFS and have noticed some very odd behavior.
The pool's name is storage and my vdev aliases are vdisk1 to vdisk6.
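For reference, the aliases come from /etc/zfs/vdev_id.conf, roughly like this (the by-id paths below are placeholders, not my real disks):
Code:
# /etc/zfs/vdev_id.conf -- maps stable disk paths to the vdisk aliases
# (the by-id paths are placeholders for the actual drives)
alias vdisk1 /dev/disk/by-id/ata-DISK1-SERIAL1
alias vdisk2 /dev/disk/by-id/ata-DISK2-SERIAL2
alias vdisk3 /dev/disk/by-id/ata-DISK3-SERIAL3
alias vdisk4 /dev/disk/by-id/ata-DISK4-SERIAL4
alias vdisk5 /dev/disk/by-id/ata-DISK5-SERIAL5
alias vdisk6 /dev/disk/by-id/ata-DISK6-SERIAL6
After editing the file, udevadm trigger repopulates the links in /dev/disk/by-vdev/.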
If I create a new pool using /dev/disk/by-id or /dev/disk/by-vdev paths, the pool is created as it should be, and zpool status storage shows the IDs or aliases that were used.
Code:
# zpool status storage
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            vdisk1  ONLINE       0     0     0
            vdisk2  ONLINE       0     0     0
            vdisk3  ONLINE       0     0     0
            vdisk4  ONLINE       0     0     0
            vdisk5  ONLINE       0     0     0
            vdisk6  ONLINE       0     0     0
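For completeness, the pool was created along these lines (a sketch; I've trimmed any extra options):
Code:
# six-disk raidz2 pool built from the by-vdev aliases
zpool create storage raidz2 \
    /dev/disk/by-vdev/vdisk1 /dev/disk/by-vdev/vdisk2 \
    /dev/disk/by-vdev/vdisk3 /dev/disk/by-vdev/vdisk4 \
    /dev/disk/by-vdev/vdisk5 /dev/disk/by-vdev/vdisk6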
But if I do a zpool export storage followed by a plain zpool import storage, suddenly some of my disks are listed as sdb, sdc, and so on.
Code:
# zpool status storage
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            vdisk2  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            vdisk5  ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors
But if I then export again and do a zpool import -d /dev/disk/by-vdev/ storage, all is fine again! I don't quite see what has changed for ZFS to import under the wrong names, so my questions are: why is this happening, and is there any way I can avoid it?
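In other words, the full sequence that reproduces the problem and the manual fix:
Code:
zpool export storage
zpool import storage                         # some vdevs come back as sdb, sdc, ...
zpool export storage
zpool import -d /dev/disk/by-vdev/ storage   # aliases show up correctly again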
Is it possible to make ZFS use -d /dev/disk/by-vdev/ at startup, so the pool is imported with the right devices?
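For example, would something along these lines be the right knob? (Just a guess on my part; I have not verified that this is the intended setting.)
Code:
# /etc/default/zfs -- assumption: ZPOOL_IMPORT_PATH controls where the
# boot-time import looks for devices
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev"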