Could /sys/block/sd[x]/device be used as a workaround?
that's not a block device, so no?
nothing besides the device nodes directly in /dev works there (the symlinks all exist and point to the right stuff, but zfs refuses to work with them...)
This is not to disagree with any other suggestion, but for Stretch I've found WWNs are the easiest for tracking which drive is which on a ZFS or Ceph setup.
Code:
ls -l /dev/disk/by-id/ |grep -v part |grep wwn
then
Code:
zpool create -f -o ashift=12 tank mirror wwn-0x55cd2e40xxxxxxxx wwn-0x55cd2e4088888888
Is there a better way to ID drives for a mirror?
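If you later need to know which /dev/sdX a given WWN is currently sitting on, the by-id symlink resolves straight to it (same placeholder WWN as above):
Code:
readlink -f /dev/disk/by-id/wwn-0x55cd2e40xxxxxxxx    # prints the kernel device, e.g. /dev/sda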
You can safely do it from the initrd:
- reboot, and when GRUB displays the menu, press 'e' on the first line
- edit the linux line so it has break=mount at the end, then press Ctrl-X
- when you get the prompt, do 'modprobe zfs'
- now do a 'zpool import -d /dev/disk/by-id/ rpool'
- verify status with 'zpool status' - you should see device IDs in place of the standard device names
- Ctrl-D to continue booting
From now on you should see your pool based on device IDs; the same commands are collected in the sketch below.
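Collected into one block, the session at the initramfs prompt looks roughly like this (pool name rpool as used above; adjust to your setup):
Code:
modprobe zfs                            # load the ZFS module in the initramfs shell
zpool import -d /dev/disk/by-id/ rpool  # re-import the pool using the by-id links
zpool status                            # vdevs should now be listed by device ID
# then press Ctrl-D to leave the shell and continue booting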
# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 0h0m with 0 errors on Thu Nov 9 11:35:01 2017
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      DEGRADED     0     0     6
          raidz2-0                                 DEGRADED     0     0    12
            pci-0000:03:00.0-sas-phy0-lun-0-part2  ONLINE       0   137     0
            pci-0000:03:00.0-sas-phy1-lun-0-part2  ONLINE       0     0     3
            pci-0000:03:00.0-sas-phy2-lun-0-part2  OFFLINE      0     0     1
            pci-0000:03:00.0-sas-phy3-lun-0-part2  ONLINE       0     0     0

errors: No known data errors
# sgdisk --replicate /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy3-lun-0 /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy2-lun-0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
# sgdisk --randomize-guids /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy2-lun-0
The operation has completed successfully.
# zpool replace rpool /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy2-lun-0
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-path/pci-0000:03:00.0-sas-phy2-lun-0-part1 is part of active pool 'rpool'
# zpool replace -f rpool /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy2-lun-0
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-path/pci-0000:03:00.0-sas-phy2-lun-0-part1 is part of active pool 'rpool'
# zpool offline rpool /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy2-lun-0-part1
cannot offline /dev/disk/by-path/pci-0000:03:00.0-sas-phy2-lun-0-part1: no such device in pool
Have you tried using "zpool online rpool /dev/..."? It seems you are trying to replace an offlined disk with itself; replace is for replacing with a NEW disk.
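For reference, the two operations differ roughly like this (the device paths below are placeholders, not taken from this pool):
Code:
# bring a device that was offlined (or briefly missing) back into the pool,
# then clear the old error counters:
zpool online rpool /dev/disk/by-path/<old-device>
zpool clear rpool

# replace a vdev with a different physical disk (old device first, new one second):
zpool replace rpool /dev/disk/by-path/<old-device> /dev/disk/by-path/<new-device>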
# zpool online rpool /dev/disk/by-path/pci-0000\:03\:00.0-sas-phy2-lun-0
cannot online /dev/disk/by-path/pci-0000:03:00.0-sas-phy2-lun-0: no such device in pool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:02:22 with 0 errors on Sun Nov 10 00:26:23 2019
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            d00-part3  ONLINE       0     0     0
            d01-part3  ONLINE       0     0     0

errors: No known data errors
alias d00 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0
alias d01 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0
alias d02 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:2:0
alias d03 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:3:0
alias d04 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:4:0
alias d05 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:5:0
alias d06 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:6:0
alias d07 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:7:0
alias d08 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:8:0
alias d09 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:9:0
alias d10 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0
alias d11 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:11:0
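In case it helps others following this, after editing /etc/zfs/vdev_id.conf the aliases only appear once udev has re-run its rules; roughly:
Code:
udevadm trigger            # re-evaluate udev rules so the alias symlinks get created
ls -l /dev/disk/by-vdev/   # the d00..d11 aliases should show up here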
Hi folks, I followed this but used vdev_id.conf, and it imported just the partition rather than the whole disk. It works, but I'd like to know if that is OK and whether there is a way to import the whole disk instead of the partition.
It is OK! Even if you create a pool using whole disks, behind the scenes ZFS will create partitions and allocate the biggest one on each disk to the pool!
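A quick way to see that layout (the device path is just an example, reusing one of the aliases above):
Code:
# ZFS on Linux typically creates a large data partition (-part1) and a small
# ~8 MB reserved partition (-part9) when it is handed a whole disk.
lsblk -o NAME,SIZE,TYPE /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0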