Balance the wear of multiple drives in a mirror + RAIDZ-1 setup

Dec 6, 2021
Hi,

We are seeing unbalanced wear on the SSDs of our two PVE nodes running PVE 6.4-13.

Node A (master) has:

  1. NVMe 500 GB SSD, wear 5%
  2. NVMe 500 GB SSD, wear 18%
  3. SATA 4 TB SSD, wear 5%
  4. SATA 4 TB SSD, wear 5%
  5. SATA 4 TB SSD, wear 5%
  6. SATA 4 TB SSD, wear 4%
  7. SATA 4 TB SSD, wear 4%
  • rpool using 1 + 2 in a mirror
  • local-vms using 3–7 in RAIDZ-1, with partitions of 1 and 2 as ZIL (SLOG) and L2ARC cache (see the sketch below)
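
For reference, a layout like this would typically be assembled with commands along the following lines (a sketch only; the device paths below are placeholders, not the actual devices on these nodes):

Code:
# mirror two NVMe partitions for the root pool
zpool create rpool mirror /dev/nvme0n1p3 /dev/nvme1n1p3
# five-disk RAIDZ-1 for VM storage
zpool create local-vms raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
# dedicated SLOG (ZIL) and L2ARC partitions on the two NVMe drives
zpool add local-vms log /dev/nvme0n1p4
zpool add local-vms cache /dev/nvme1n1p4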

Code:
  pool: local-vms
 state: ONLINE
config:

    NAME                                                    STATE     READ WRITE CKSUM
    local-vms                                               ONLINE       0     0     0
      raidz1-0                                              ONLINE       0     0     0
        wwn-0x5002538e49a728ee                              ONLINE       0     0     0
        wwn-0x5002538e19834f3d                              ONLINE       0     0     0
        wwn-0x5002538e49a728ea                              ONLINE       0     0     0
        wwn-0x5002538e09a0cddd                              ONLINE       0     0     0
        wwn-0x5002538e09a0d151                              ONLINE       0     0     0
    logs
      nvme-eui.0025385a91b011a8-part4                       ONLINE       0     0     0
    cache
      nvme-Samsung_SSD_970_PRO_512GB_S463NX0MA01189T-part4  ONLINE       0     0     0


  pool: rpool
 state: ONLINE
config:

    NAME                                 STATE     READ WRITE CKSUM
    rpool                                ONLINE       0     0     0
      mirror-0                           ONLINE       0     0     0
        nvme-eui.0025385a91b013d4-part3  ONLINE       0     0     0
        nvme-eui.0025385a91b011a8-part3  ONLINE       0     0     0


Node B (slave) has:

  1. SATA 500 GB SSD, wear 12%
  2. SATA 500 GB SSD, wear 3%
  3. SATA 4 TB HDD
  4. SATA 4 TB HDD
  5. SATA 4 TB HDD
  6. SATA 4 TB HDD
  7. SATA 4 TB HDD
  • rpool using 1 + 2 in a mirror
  • local-vms using 3–7 in RAIDZ-1, with partitions of 1 and 2 as ZIL (SLOG) and L2ARC cache
Code:
root@slave:~# zpool status
  pool: local-vms
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    local-vms   ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sda     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sdg     ONLINE       0     0     0
    logs
      sdc4      ONLINE       0     0     0
    cache
      sdb4      ONLINE       0     0     0

  pool: rpool
 state: ONLINE
config:

    NAME                                                     STATE     READ WRITE CKSUM
    rpool                                                    ONLINE       0     0     0
      mirror-0                                               ONLINE       0     0     0
        ata-Samsung_SSD_860_PRO_512GB_S42YNF0M913090M-part3  ONLINE       0     0     0
        ata-Samsung_SSD_860_PRO_512GB_S42YNF0M913074T-part3  ONLINE       0     0     0
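
The wear figures quoted above come from SMART data; assuming smartmontools and nvme-cli are installed, they can be checked like this:

Code:
# SATA SSDs: Samsung exposes a normalized Wear_Leveling_Count attribute
smartctl -A /dev/sda | grep -i wear
# NVMe SSDs: percentage_used counts up from 0 as the drive wears out
nvme smart-log /dev/nvme0 | grep -i percentage_used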



Is there a way to balance the wear of A1 vs. A2 and B1 vs. B2 to extend the lifetime of A2 and B1?

The only thing I can think of would be to swap the disks between ports on the motherboard.
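
Another thought: the SLOG partition concentrates synchronous writes on one NVMe drive while the L2ARC partition sits on the other, so swapping the log and cache roles between the two partitions might even out the wear at the ZFS level. An untested sketch using Node A's device names (both vdev types can be removed and re-added online):

Code:
# remove the current SLOG and L2ARC devices from the pool
zpool remove local-vms nvme-eui.0025385a91b011a8-part4
zpool remove local-vms nvme-Samsung_SSD_970_PRO_512GB_S463NX0MA01189T-part4
# re-add them with the roles swapped
zpool add local-vms log /dev/disk/by-id/nvme-Samsung_SSD_970_PRO_512GB_S463NX0MA01189T-part4
zpool add local-vms cache /dev/disk/by-id/nvme-eui.0025385a91b011a8-part4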
 