Inconsistent disk-by-id naming convention in a ZFS mirror using P4501 NVMe drives

Hello, I have not seen this before with other pools and am wondering if it is something to be concerned about.
Each drive in the mirror, as reported by zpool status, seems to follow a different disk/by-id naming convention.
The NVMe drives have identical model numbers, firmware configurations, firmware versions, and LBA formatting.

Both drives are 2 TB Intel P4501s, generic/OEM branded (Intel firmware, not Cisco/Oracle).
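
In case it matters, this is roughly how I checked that both drives match, using nvme-cli (device names here are just the ones on my box):
Code:
# both controllers should report the same model, FW rev and 4096-byte format
nvme list
# per-controller identity data (serial, model, firmware revision)
nvme id-ctrl /dev/nvme0 | grep -E '^(sn|mn|fr)'
nvme id-ctrl /dev/nvme1 | grep -E '^(sn|mn|fr)'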

Preliminary steps:
wiped both drives
updated the firmware on both drives with SST
formatted both drives to LBA=4096n, metadata=0 (sketch below)
installed the drives in the Proxmox machine
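
For reference, the LBA switch can be done with nvme-cli along these lines (not necessarily the exact commands I ran; the --lbaf index depends on how the drive lists its formats, and the format wipes the namespace):
Code:
# list supported LBA formats and note the index of the 4096-byte / metadata 0 entry
nvme id-ns -H /dev/nvme0n1
# destructive: switch the namespace to that format (index 1 is just an example)
nvme format /dev/nvme0n1 --lbaf=1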

I then used the GUI to make a ZFS mirror. The disk/by-id name shown in zpool status appears to follow a different convention for nvme1n1 than for nvme0n1.
Running ls -l /dev/disk/by-id/ shows both the conventional "nvme-INTEL_SSDPEetcetc_PHLFetcetc" names and the long "nvme-nvme.8086-hugealphanumericstring" versions of the name. The SATA SSDs I've used only ever show up under the conventional name.
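
My (possibly wrong) understanding is that the short name is built from the model and serial number while the long hex one is the kernel's wwid fallback, so this is what I was planning to compare between the two drives:
Code:
# the long "nvme-nvme.8086-..." by-id name should match the kernel wwid
cat /sys/block/nvme0n1/wwid
cat /sys/block/nvme1n1/wwid
# udev properties that the /dev/disk/by-id symlinks are generated from
udevadm info --query=property --name=/dev/nvme0n1 | grep -E 'ID_MODEL|ID_SERIAL|ID_WWN'
udevadm info --query=property --name=/dev/nvme1n1 | grep -E 'ID_MODEL|ID_SERIAL|ID_WWN'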


I made sure the pool survives a reboot, but the naming inconsistency is making me hesitant to start filling up the pool.
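
If the different names turn out to be harmless, I assume I could still normalize them by exporting and re-importing with explicit by-id paths, something like this (untested on this pool):
Code:
zpool export nvmepool
zpool import -d /dev/disk/by-id nvmepool
zpool status nvmepool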

Code:
proxmox-ve: 8.4.0 (running kernel: 6.8.12-16-pve)

Code:
02:00.0 Non-Volatile memory controller [0108]: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] [8086:0a54]
        Subsystem: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] [8086:4703]
        Kernel driver in use: nvme
        Kernel modules: nvme
04:00.0 Non-Volatile memory controller [0108]: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] [8086:0a54]
        Subsystem: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] [8086:4703]
        Kernel driver in use: nvme
        Kernel modules: nvme

zpool status
Code:
  pool: nvmepool
 state: ONLINE
  scan: scrub repaired 0B in 00:03:16 with 0 errors on Tue Nov  4 11:16:24 2025
config:

        NAME                                                                                                     STATE     READ WRITE CKSUM
        nvmepool                                                                                                  ONLINE       0     0     0
          mirror-0                                                                                               ONLINE       0     0     0
            nvme-nvme.8086-50484c4637353430403533133320805c474e-494e54454c205353445045374b583032305437-00000001  ONLINE       0     0     0
            nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN                                                          ONLINE       0     0     0

errors: No known data errors

ls -l /dev/disk/by-id/
Code:
lrwxrwxrwx 1 root root 13 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504001D3P0LGN -> ../../nvme1n1
lrwxrwxrwx 1 root root 13 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504001D3P0LGN_1 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504001D3P0LGN_1-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504001D3P0LGN_1-part9 -> ../../nvme1n1p9
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504001D3P0LGN-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504001D3P0LGN-part9 -> ../../nvme1n1p9
lrwxrwxrwx 1 root root 13 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN_1 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN_1-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN_1-part9 -> ../../nvme0n1p9
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-INTEL_SSDPE7KX020T7_PHLF7504003J6P0LGN-part9 -> ../../nvme0n1p9
lrwxrwxrwx 1 root root 13 Nov  4 11:19 nvme-nvme.8086-50484c4637353430403533133320805c474e-494e54454c205353445045374b583032305437-00000001 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-nvme.8086-50484c4637353430403533133320805c474e-494e54454c205353445045374b583032305437-00000001-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-nvme.8086-50484c4637353430403533133320805c474e-494e54454c205353445045374b583032305437-00000001-part9 -> ../../nvme1n1p9
lrwxrwxrwx 1 root root 13 Nov  4 11:19 nvme-nvme.8086-50484c4637353344333035560335377c474e-494e54454c205353445045374b583032305437-00000001 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-nvme.8086-50484c4637353344333035560335377c474e-494e54454c205353445045374b583032305437-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Nov  4 11:19 nvme-nvme.8086-50484c4637353344333035560335377c474e-494e54454c205353445045374b583032305437-00000001-part9 -> ../../nvme0n1p9

Not to hijack my own thread, but I also have a Cisco-rebranded P4501 that causes the host to lock up and become unresponsive when passed through to a guest, regardless of blacklisting and softdep. I always thought this was an issue with that particular drive, but after seeing the mismatched names in zpool status for the generic Intel P4501s, I'm wondering if these drives just have compatibility gotchas.
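
For completeness, the blacklisting/softdep attempt on that box looked roughly like this (the vendor:device ID is the generic Intel one from the lspci output above, so it may not match what the Cisco-branded drive reports):
Code:
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:0a54
softdep nvme pre: vfio-pci
followed by update-initramfs -u -k all and a reboot.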
 