ZFS Spare Device - Label instead of NGUID

linushstge

Hello everyone, while setting up a new Proxmox node (7.3) I noticed the following behavior.

When creating the ZFS pool, the Proxmox installer ideally already uses the NVMe NGUIDs instead of the plain disk labels.
After the installation, additional spare devices are likewise added via their NGUIDs:

Code:
zpool add rpool spare /dev/disk/by-id/nvme-eui.000000000000000100a075223c09e6d8
zpool add rpool spare /dev/disk/by-id/nvme-eui.000000000000000100a075223c8fe7b7
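
For completeness, this is how I cross-check which kernel device name belongs to which NGUID; it is just a look at the by-id symlinks, nothing pool-specific:

Bash:
# resolve the nvme-eui.* symlinks to their kernel device names
ls -l /dev/disk/by-id/ | grep nvme-eui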

zpool status correctly addresses the two additional spare devices via NGUID:

Bash:
# zpool status

pool: rpool
state: ONLINE
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a422a-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c898cd3-part3  ONLINE       0     0     0
          mirror-1                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c917e30-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a420c-part3  ONLINE       0     0     0
          mirror-2                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4228-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a41dd-part3  ONLINE       0     0     0
          mirror-3                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4201-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4204-part3  ONLINE       0     0     0
          mirror-4                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a36f9-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4661-part3  ONLINE       0     0     0
        spares
          nvme-eui.000000000000000100a075223c09e6d8          AVAIL
          nvme-eui.000000000000000100a075223c8fe7b7          AVAIL

After a reboot, the identical zpool status command looks like this:

Bash:
# zpool status

pool: rpool
state: ONLINE
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a422a-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c898cd3-part3  ONLINE       0     0     0
          mirror-1                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c917e30-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a420c-part3  ONLINE       0     0     0
          mirror-2                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4228-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a41dd-part3  ONLINE       0     0     0
          mirror-3                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4201-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4204-part3  ONLINE       0     0     0
          mirror-4                                           ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a36f9-part3  ONLINE       0     0     0
            nvme-eui.000000000000000100a075223c4a4661-part3  ONLINE       0     0     0
        spares
          nvme8n1                                            AVAIL
          nvme9n1                                            AVAIL
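
I could probably work around this by removing the spares and re-adding them via their by-id paths (untested sketch below, same devices as above), but I would rather understand why it happens in the first place.

Bash:
# workaround sketch, not yet tested across reboots:
# drop the spare that reverted to its kernel name and re-add it by NGUID
zpool remove rpool nvme8n1
zpool add rpool spare /dev/disk/by-id/nvme-eui.000000000000000100a075223c09e6d8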

Instead of the NGUIDs, the plain disk labels (kernel device names) are now used again. Can this behavior be adjusted via /etc/default/zfs (ZPOOL_IMPORT_PATH), or why is it that only the spares are not addressed via NGUID?
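
If ZPOOL_IMPORT_PATH is indeed the right knob, I assume the entry in /etc/default/zfs would look roughly like this (my assumption, not verified yet):

Bash:
# /etc/default/zfs -- assumption, untested:
# restrict the device paths scanned on pool import to the by-id links
ZPOOL_IMPORT_PATH="/dev/disk/by-id"

Presumably that would also need an update-initramfs -u, since rpool is already imported from the initramfs, though I am not sure the initramfs honors that file at all.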

Thanks in advance for all answers :)
 