Hi
I booted from a live CD and imported rpool to perform recovery operations on a partition that was already mounted.
After that, when I booted back into PVE, one of the drives in the pool showed up as FAULTED.
Code:
zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 06:20:20 with 0 errors on Sun Oct 12 06:44:21 2025
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                DEGRADED     0     0     0
          raidz2-0                                           DEGRADED     0     0     0
            nvme-eui.36344830583444940025384e00000001-part3  ONLINE       0     0     0
            nvme-eui.36344830583444920025384e00000001-part3  ONLINE       0     0     0
            nvme-eui.36344830583444880025384e00000001-part3  ONLINE       0     0     0
            nvme-eui.36344830583444970025384e00000001-part3  ONLINE       0     0     0
            nvme-eui.36344830583444960025384e00000001-part3  ONLINE       0     0     0
            nvme-eui.36344830583444950025384e00000001-part3  ONLINE       0     0     0
            nvme-eui.36344830583444840025384e00000001-part3  ONLINE       0     0     0
            4507037677091464003                              FAULTED      0     0     0  was /dev/nvme7n1p3

errors: No known data errors
Code:
zdb
rpool:
    version: 5000
    name: 'rpool'
    state: 0
    txg: 7680916
    pool_guid: 16987673894477835616
    errata: 0
    hostid: 4228784363
    hostname: 'cloud-v002'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 16987673894477835616
        create_txg: 4
        com.klarasystems:vdev_zap_root: 129
        children[0]:
            type: 'raidz'
            id: 0
            guid: 6309061694612082912
            nparity: 2
            metaslab_array: 139
            metaslab_shift: 34
            ashift: 12
            asize: 30717411065856
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 130
            children[0]:
                type: 'disk'
                id: 0
                guid: 9808093091410395546
                path: '/dev/disk/by-id/nvme-eui.36344830583444940025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/4'
                whole_disk: 0
                DTL: 109016
                create_txg: 4
                com.delphix:vdev_zap_leaf: 131
            children[1]:
                type: 'disk'
                id: 1
                guid: 12412313369067703708
                path: '/dev/disk/by-id/nvme-eui.36344830583444920025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/12'
                whole_disk: 0
                DTL: 109015
                create_txg: 4
                com.delphix:vdev_zap_leaf: 132
            children[2]:
                type: 'disk'
                id: 2
                guid: 14627112648788585688
                path: '/dev/disk/by-id/nvme-eui.36344830583444880025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/3'
                whole_disk: 0
                DTL: 109014
                create_txg: 4
                com.delphix:vdev_zap_leaf: 133
            children[3]:
                type: 'disk'
                id: 3
                guid: 5227534298990620931
                path: '/dev/disk/by-id/nvme-eui.36344830583444970025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/11'
                whole_disk: 0
                DTL: 109013
                create_txg: 4
                com.delphix:vdev_zap_leaf: 134
            children[4]:
                type: 'disk'
                id: 4
                guid: 1011566563601879184
                path: '/dev/disk/by-id/nvme-eui.36344830583444960025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/2'
                whole_disk: 0
                DTL: 109012
                create_txg: 4
                com.delphix:vdev_zap_leaf: 135
            children[5]:
                type: 'disk'
                id: 5
                guid: 7182381524902485433
                path: '/dev/disk/by-id/nvme-eui.36344830583444950025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/10'
                whole_disk: 0
                DTL: 109011
                create_txg: 4
                com.delphix:vdev_zap_leaf: 136
            children[6]:
                type: 'disk'
                id: 6
                guid: 6474831926635961944
                path: '/dev/disk/by-id/nvme-eui.36344830583444840025384e00000001-part3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/1'
                whole_disk: 0
                DTL: 109010
                create_txg: 4
                com.delphix:vdev_zap_leaf: 137
            children[7]:
                type: 'disk'
                id: 7
                guid: 4507037677091464003
                path: '/dev/nvme7n1p3'
                vdev_enc_sysfs_path: '/sys/bus/pci/slots/2'
                whole_disk: 0
                not_present: 1
                DTL: 90305
                create_txg: 4
                expansion_time: 1761130666
                com.delphix:vdev_zap_leaf: 87501
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
ZFS_DBGMSG(zdb) START:
metaslab.c:1703:spa_set_allocator(): spa allocator: dynamic
ZFS_DBGMSG(zdb) END
Is the drive actually OK? How do I correctly re-enable it in rpool so it shows up under a by-id name like the other drives?
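Based on the 'action' hint in the zpool status output, I'm guessing the fix is something along these lines, where the by-id path is just a placeholder for the faulted disk's actual EUI (I haven't run anything yet and would like to confirm first):

Code:
# find the stable by-id name of the faulted disk (currently /dev/nvme7n1)
ls -l /dev/disk/by-id/ | grep nvme7n1

# replace the missing vdev (GUID 4507037677091464003) with the same disk,
# this time referenced by its by-id path; <EUI> is a placeholder, not the real value
zpool replace rpool 4507037677091464003 /dev/disk/by-id/nvme-eui.<EUI>-part3

# assumption on my side: if ZFS refuses because it still sees a stale label on
# the partition, 'zpool labelclear' on that partition might be needed first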
From what I've read, the recommendation is to boot from a live CD again and then export and re-import the pool. Is there an easier way?
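Just so I understand what that would involve: my reading is that it comes down to roughly the following from the live environment, since rpool can't be exported while PVE is running from it (again, this is my assumption of the procedure, not something I've tested):

Code:
# from the live CD, not from the running PVE install
zpool import -d /dev/disk/by-id -N -R /mnt rpool   # import using by-id names, without mounting datasets
zpool export rpool                                  # export cleanly, then reboot into PVE

If 'zpool replace' from the running system already brings the disk back under its by-id name, I'd prefer to skip the live CD step entirely.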