Confused On ZFS Failed Disk Replacement Because Proxmox Created Three Partitions

mhayhurst

Active Member
Jul 21, 2016
Hello everyone,

I installed Proxmox (7.2-4) using ZFS (RAID1), and one of the two drives has failed. I thought Proxmox would use both entire disks for the ZFS rpool during installation, but it appears Proxmox created three partitions and included only one of them in the rpool. With that said, it would seem this would NOT work:

Bash:
zpool replace rpool /dev/disk/by-id/ata-CT2000BX500SSD1_2133E5C0D3BF-part3 /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C

/dev/sda (ata-CT2000BX500SSD1_2207E60BC64C) is the new disk.

Here is the zpool status:

Bash:
root@proxmox1:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:42:31 with 0 errors on Sun Mar 13 01:06:32 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        rpool                                     DEGRADED     0     0     0
          mirror-0                                DEGRADED     0     0     0
            nvme-CT2000P2SSD8_2151E5F38981-part3  ONLINE       0     0     0
            8616914082538993740                   UNAVAIL      0     0     0  was /dev/disk/by-id/ata-CT2000BX500SSD1_2133E5C0D3BF-part3



Disks by ID (I'm not sure why there are nvme-eui.* entries, or what wwn-0x500a0751e60bc64c is?):

Bash:
root@proxmox1:~# ls -al /dev/disk/by-id

lrwxrwxrwx 1 root root   9 May 21 13:29 ata-CT2000BX500SSD1_2207E60BC64C -> ../../sda
lrwxrwxrwx 1 root root  13 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981 -> ../../nvme0n1
lrwxrwxrwx 1 root root  15 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root  15 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root  15 May 21 13:29 nvme-CT2000P2SSD8_2151E5F38981-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root  13 May 21 13:29 nvme-eui.6479a75a30000060 -> ../../nvme0n1
lrwxrwxrwx 1 root root  15 May 21 13:29 nvme-eui.6479a75a30000060-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root  15 May 21 13:29 nvme-eui.6479a75a30000060-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root  15 May 21 13:29 nvme-eui.6479a75a30000060-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root   9 May 21 13:29 wwn-0x500a0751e60bc64c -> ../../sda

Any idea why all three partitions are not in the ZFS rpool and how I should go about properly replacing the failed disk?

Dunuin

Famous Member
Jun 30, 2020
Only the 3rd partition is used for your ZFS pool, because the other two are needed for booting (the installer creates a small BIOS boot partition and an EFI system partition), and booting directly from ZFS wouldn't be possible otherwise.
See the chapter "Changing a failed bootable device" for how to replace the disk: https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration
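In outline, the procedure from that chapter looks roughly like this, using the device IDs from your own listing. This is a sketch, not something to run blindly: it assumes your system uses proxmox-boot-tool (PVE 6.4 and newer installs do), and you should double-check the disk IDs and whether you boot via UEFI or legacy BIOS against the wiki before running anything:

```shell
# 1. Copy the partition table from the healthy disk to the new disk,
#    then randomize the new disk's GUIDs so they don't clash:
sgdisk /dev/disk/by-id/nvme-CT2000P2SSD8_2151E5F38981 -R /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C
sgdisk -G /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C

# 2. Replace the failed member with the new disk's 3rd partition.
#    The old device can be referenced by the numeric GUID that
#    'zpool status' shows for the UNAVAIL vdev:
zpool replace -f rpool 8616914082538993740 /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C-part3

# 3. Make the new disk bootable (partition 2 is the ESP):
proxmox-boot-tool format /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C-part2
proxmox-boot-tool init /dev/disk/by-id/ata-CT2000BX500SSD1_2207E60BC64C-part2

# 4. Watch the resilver until the pool is ONLINE again:
zpool status -v rpool
```

The pool stays usable throughout; the mirror just resilvers onto the new partition in the background.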
 
