On a NUC, expansion flexibility tends to live on the outside of the box, for lack of space inside.
I've started to experiment with the latest generation of 10Gbit USB drives like the Kingston DataTraveler to regain some of the flexibility I enjoyed when most of my storage was on carrier-less SATA SSDs in hot-swap enclosures. It's nowhere near NVMe bandwidth, but twice ordinary SATA at around 1GByte/s sequential, and being a full SATA controller it should implement the full scope of wear levelling and defect management you'd need to run an OS reliably. So I hope it's a new class of USB storage, one that overcomes the typical limits and hazards of the USB sticks of the past. After all, it's more expensive than the same capacity in NVMe or SATA!
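To check whether a given stick actually delivers that, a quick-and-dirty sequential throughput test is easy to script. The sketch below is a rough Python approximation, not a proper benchmark (fio would be more rigorous); the mount point /mnt/usbstick is an assumption you'd adjust to your setup:

```python
import os
import time

# Hypothetical mount point of the USB drive under test (adjust to your setup).
TEST_FILE = "/mnt/usbstick/throughput.bin"
BLOCK = 16 * 1024 * 1024          # 16 MiB per I/O call
TOTAL = 8 * 1024 * 1024 * 1024    # 8 GiB test size, large enough to outrun caches

def seq_write() -> float:
    buf = os.urandom(BLOCK)  # incompressible data, in case the controller compresses
    start = time.monotonic()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the stick
    return TOTAL / (time.monotonic() - start)

def seq_read() -> float:
    # Note: without dropping the page cache (or using O_DIRECT), reads of a
    # freshly written file may partly come from RAM; run after a cache drop.
    start = time.monotonic()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    return TOTAL / (time.monotonic() - start)

if __name__ == "__main__":
    print(f"sequential write: {seq_write() / 1e9:.2f} GB/s")
    print(f"sequential read:  {seq_read() / 1e9:.2f} GB/s")
```

A stick in the class described above should land somewhere near 1 GB/s on both numbers; older USB sticks typically fall off a cliff on sustained writes.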
Also, since Proxmox likes to eat disks whole, I've used the smallest size, 256GB sticks, to boot Proxmox and then allocated the internal NVMe, SATA and external whatever to Proxmox VM or container storage and backup. The Thunderbolt port connects to a 10Gbase-T network for business, whilst the on-board 1/2.5Gbit ports can do corosync. That's still far from ideal functional network separation, but this is home-lab, not enterprise.
Having USB boot means you could pass through the entire onboard storage to a VM, but without passing through a dedicated storage network too, I just see bottlenecks being pushed from one corner to another. I feel safe saying that in most cases the network is the far bigger (or should I say smaller?) bottleneck than storage these days: even 10Gbit Ethernet tops out around 1.2GByte/s, while a single NVMe drive can sustain several times that.
Except for spinning rust, but that shouldn't require pass-through to reach maximum bandwidth either, nor incur significant latency or CPU usage penalties with modern non-emulation (virtio) hypervisor drivers.
I haven't done tons of benchmarking with Proxmox yet, but so far VM storage on CEPH has always come rather close to network speed (minus the 3x write amplification penalty of three-way full-replica writes), at least with NVMe storage and 10Gbit Ethernet, even without pass-through, just using the non-emulation drivers.
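As a back-of-the-envelope check on that penalty, here's a rough Python model of what a replicated pool leaves for client writes; the numbers are illustrative assumptions, not measurements from my cluster:

```python
# Rough model of client write throughput on a replicated CEPH pool.
# All figures below are illustrative assumptions, not measurements.

NET_GBPS = 10.0          # network line rate per node, in Gbit/s
REPLICAS = 3             # size=3 pool: each client write is stored three times
PROTO_EFFICIENCY = 0.9   # crude allowance for TCP/Ceph protocol overhead

# Usable payload bandwidth on the wire, in GByte/s.
line_rate_gbs = NET_GBPS / 8 * PROTO_EFFICIENCY

# With full replication over a single shared network, every byte a client
# writes crosses the wire roughly REPLICAS times (once to the primary OSD,
# then once per additional replica), so client writes see ~1/REPLICAS of it.
client_write_gbs = line_rate_gbs / REPLICAS

print(f"usable line rate:       {line_rate_gbs:.2f} GByte/s")
print(f"expected client writes: ~{client_write_gbs:.2f} GByte/s")
```

With those assumed figures you'd expect client writes to settle around a third of line rate, which matches the general shape of what I've been seeing, while reads come from a single replica and can run much closer to full network speed.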