Hi all,
I’m working on reorganizing storage on a Proxmox VE 9.1.2 system with multiple HDDs, SSDs, NVMe devices, and a mix of SATA + LSI HBA-connected disks. While doing this I noticed something that has been a recurring challenge:
Proxmox splits storage information across three separate views (rough CLI equivalents are sketched after this list):
- Node > Disks
- Shows physical disks, sizes, SMART, partitions
- Does not show ZFS pool membership
- Does not show how the disk is used by Proxmox storage
- Node > ZFS
- Shows pools, vdevs, health, and ZFS metadata
- Does not show corresponding physical disks clearly (especially with HBAs)
- Does not show which Proxmox storages use which datasets/pools
- Datacenter > Storage
- Shows configured storage definitions (directories, ZFS pools, NFS, etc.)
- Does not show the underlying disks
- Does not show ZFS dataset hierarchy or pool structure
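For reference, roughly the same three views can be reproduced from the shell; device and pool names below are placeholders for whatever your system actually has:

```
# Node > Disks, roughly:
lsblk -o NAME,MODEL,SERIAL,SIZE,TYPE
smartctl -H /dev/sda            # health summary for a single drive

# Node > ZFS, roughly:
zpool status -P                 # -P prints full device paths for vdev members
zfs list -r tank                # "tank" is a placeholder pool name

# Datacenter > Storage, roughly:
cat /etc/pve/storage.cfg
pvesm status                    # PVE's own storage overview
```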
And on my server I'm not even using Ceph, so go figure what this looks like with a Ceph view added on top...
As a diagram:
┌──────────────────────────┐
│ Datacenter / Storage │
│──────────────────────────│
│ Shows: │
│ • Storage IDs │
│ • Paths / Pools │
│ • Content types │
│ • Enable/Disable flags │
│ │
│ Missing: │
│ • Disk-level mapping │
│ • Pool/vdev structure │
└───────────────▲──────────┘
│
Uses these
│
┌──────────────┴──────────────┐
│ Node / ZFS │
│─────────────────────────────│
│ Shows: │
│ • Pools & vdevs │
│ • Health │
│ • Redundancy layout │
│ • Dataset tree │
│ │
│ Missing: │
│ • Physical disk info │
│ • Controller/HBA details │
│ • Role in storage.cfg │
└───────────────▲─────────────┘
│
Built on top of
│
┌───────────────┴──────────────┐
│ Node / Disks │
│──────────────────────────────│
│ Shows: │
│ • Physical drives │
│ • Serial/model │
│ • SMART info │
│ • /dev/sdX assignments │
│ │
│ Missing: │
│ • ZFS pool membership │
│ • Storage role │
│ • Dataset relationships │
└──────────────────────────────┘
Summary:
- *Node / Disks* shows hardware, but not ZFS.
- *Node / ZFS* shows pools/datasets, but not the disks' physical details.
- *Datacenter / Storage* shows PVE storage definitions, but not pools or hardware.
All three views contain parts of the puzzle, but no single place links:
DISK > VDEV > POOL > DATASET > STORAGE > USAGE
As a result, to fully understand how a disk is being used, you must cross-reference all three views, plus sometimes run CLI tools (lsblk, zpool status, zfs list, smartctl, etc.).
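For example, tracing a single disk end to end by hand looks roughly like this (sdb and tank are placeholder names):

```
# 1) Physical identity (Node > Disks territory)
lsblk -o NAME,MODEL,SERIAL /dev/sdb
ls -l /dev/disk/by-id/ | grep sdb     # find its stable by-id alias

# 2) Pool membership (Node > ZFS territory)
zpool status -P tank                  # does that by-id path appear as a vdev member?

# 3) Storage role (Datacenter > Storage territory)
grep -B1 -A3 'pool tank' /etc/pve/storage.cfg
```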
This becomes especially confusing when:
- multiple disks are connected to HBAs
- drives are added/removed (sdX ordering changes; see the by-id snippet after this list)
- several ZFS pools exist (system pool, VM pools, backup pools, scratch, etc.)
- datasets map inconsistently to storage.cfg definitions
- some disks appear “unused” until cross-checked with CLI output
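On the sdX-ordering point in particular, the persistent names under /dev/disk/by-id are the only stable anchor. Mapping the current kernel names to them needs nothing PVE-specific, just standard tools:

```
# Map volatile kernel names (/dev/sdX, /dev/nvme...) to persistent by-id aliases
for link in /dev/disk/by-id/*; do
    printf '%s -> %s\n' "$(readlink -f "$link")" "$link"
done | sort
```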
What would help is a unified or linked view that correlates the whole chain:
- Physical disk > by-id > pool/vdev > dataset > mountpoint > storage.cfg entry > role in Proxmox
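Until something like that exists in the GUI, a rough approximation can be scripted. The sketch below assumes ZFS-backed storages and pools built from /dev/disk/by-id paths; treat it as a starting point rather than a finished tool:

```
#!/bin/bash
# Rough consolidated view: physical disk -> ZFS pool -> storage.cfg entry.
for pool in $(zpool list -H -o name); do
    echo "== pool: $pool =="
    # vdev members, resolved from their by-id paths back to kernel names
    zpool status -P "$pool" | awk '/\/dev\//{print $1}' | while read -r dev; do
        disk=$(readlink -f "$dev")
        model=$(lsblk -ndo MODEL "$disk" 2>/dev/null)
        echo "  disk: $disk ($model) via $dev"
    done
    # storage definitions referencing this pool (substring match, so verify by eye)
    grep -B1 "pool $pool" /etc/pve/storage.cfg | sed 's/^/  /'
done
```

The output is one block per pool, which at least puts the disk, pool, and storage layers on the same screen.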
Question:
Is there any plan, proposal, or known community tool that provides a consolidated storage view in PVE 8/9?
Or an intention to bring these elements together in future releases?
Right now the data is all there, just scattered across three places without cross-references. Consolidation would make storage administration much easier, especially for servers with >6 disks or with HBAs.
Greetings from the North