Hello everyone,

I installed Proxmox (7.2-4) using ZFS (RAID1) and one of the two drives has failed. I thought Proxmox would use both entire disks for the ZFS rpool during the installation, but it appears Proxmox created three partitions and included only one partition in the ZFS rpool. With that said, it would seem this would NOT work:

Bash:
zpool replace rpool /dev/disk/by-id/ata-CT2000BX500SSD1_2133E5C0D3BF-part3 /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C

/dev/sda (ata-CT2000BX500SSD1_2207E60BC64C) is the new disk.
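From what I can tell from the "Changing a failed bootable device" section of the admin guide, I would first need to copy the partition layout from the healthy disk onto the new one and then replace only the third (ZFS) partition, not the whole disk. Here is a sketch of what I think the procedure looks like, filled in with my disk IDs (untested, so please correct me if this is wrong):

Bash:
# Copy the partition table from the healthy NVMe disk to the new SATA disk
sgdisk /dev/disk/by-id/nvme-CT2000P2SSD8_2151E5F38981 -R /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C
# Randomize the GUIDs so the clone doesn't collide with the source disk
sgdisk -G /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C
# Replace only the third partition in the mirror, not the whole disk
zpool replace rpool /dev/disk/by-id/ata-CT2000BX500SSD1_2133E5C0D3BF-part3 /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C-part3
# Make the new disk bootable again (partition 2 should be the ESP)
proxmox-boot-tool format /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C-part2
proxmox-boot-tool init /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C-part2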
Here is the zpool status:
Bash:
root@proxmox1:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:42:31 with 0 errors on Sun Mar 13 01:06:32 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        rpool                                     DEGRADED     0     0     0
          mirror-0                                DEGRADED     0     0     0
            nvme-CT2000P2SSD8_2151E5F38981-part3  ONLINE       0     0     0
            8616914082538993740                   UNAVAIL      0     0     0  was /dev/disk/by-id/ata-CT2000BX500SSD1_2133E5C0D3BF-part3
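For reference, this is how I would inspect the partition layout on the remaining healthy disk; I assume it shows the usual Proxmox layout of a BIOS boot partition, an EFI system partition, and the ZFS partition:

Bash:
# Show each partition's size and GPT type name
lsblk -o NAME,SIZE,TYPE,PARTTYPENAME /dev/nvme0n1
# Alternatively, print the full GPT
sgdisk -p /dev/nvme0n1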
Disks by ID (not sure why there are the nvme-eui.* entries or what wwn-0x500a0751e60bc64c is?):
Bash:
root@proxmox1:~# ls -al /dev/disk/by-id
lrwxrwxrwx 1 root root  9 May 21 13:29 ata-CT2000BX500SSD1_2207E60BC64C -> ../../sda
lrwxrwxrwx 1 root root 13 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 13 May 21 13:29 nvme-eui.6479a75a30000060 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 May 21 13:29 nvme-eui.6479a75a30000060-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 May 21 13:29 nvme-eui.6479a75a30000060-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 May 21 13:29 nvme-eui.6479a75a30000060-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root  9 May 21 13:29 wwn-0x500a0751e60bc64c -> ../../sda
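As far as I can tell, the wwn-* and nvme-eui.* names are just extra unique-ID symlinks (the disk's World Wide Name and NVMe EUI-64) that udev creates for the same devices, which the symlink targets above seem to confirm:

Bash:
# Both by-id names resolve to the same underlying block device
readlink -f /dev/disk/by-id/wwn-0x500a0751e60bc64c
readlink -f /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C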
Any idea why all three partitions are not in the ZFS rpool and how I should go about properly replacing the failed disk?