With any SCSI-type solution (anything-over-fabric), block-level snapshots, thin provisioning and overcommitment only work if the storage is aware of which blocks are actually in use or have changed. That means things like TRIM have to travel from the guest all the way down to the storage, and I’ve rarely seen a setup where that is actively used and properly configured.
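As a quick sanity check (just a sketch, the device names are placeholders), you can read the block layer's discard parameters from sysfs; if discard_max_bytes is 0 at any layer (guest disk, multipath device, LUN), TRIM isn't getting through there. On Proxmox the virtual disk also needs discard=on, otherwise the guest's TRIM never even reaches the host.

```python
#!/usr/bin/env python3
"""Rough check: does this block device advertise discard/TRIM support?
Run inside the guest, and on the host against the backing device.
Device names below are examples only -- adjust for your setup."""
from pathlib import Path

def discard_info(dev: str) -> dict:
    q = Path("/sys/block") / dev / "queue"
    return {
        "discard_granularity": int((q / "discard_granularity").read_text()),
        "discard_max_bytes": int((q / "discard_max_bytes").read_text()),
    }

if __name__ == "__main__":
    for dev in ("sda", "vda", "sdb"):   # example device names
        try:
            info = discard_info(dev)
        except FileNotFoundError:
            continue                    # device not present on this box
        ok = info["discard_max_bytes"] > 0
        print(f"{dev}: {info} -> discard {'supported' if ok else 'NOT supported'}")
```

The practical end-to-end test is still running fstrim -v on a mountpoint inside the guest and checking whether the array actually reclaims the space.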
Proprietary storage/RAID typically doesn’t support this without “extra payment” or “custom licenses” on the most expensive tiers; there is simply too much overhead for the Pentium 3-class ARM chips with 512 MB or 1 GB of RAM you typically see in these controllers. The more expensive systems (eg Dell Power*, HPE Cray) are basically full servers running OpenBSD.
As people mentioned above, the Linux kernel in Proxmox may support this at other levels, or your guest may support it (eg LVM or ZFS), but yes, you’re adding another layer. Note that whenever you see a howto, you can replace iSCSI with FC or SAS or Ultra320 or even NVMe-oF; it’s all the same SCSI-style block-over-fabric model, as the sketch below illustrates. You may be able to write a plug-in for your specific storage solution, but once people get it working, most just run with that and ignore the GUI stuff.
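To illustrate that point (not specific to any vendor or fabric): once the LUN is attached, the kernel presents it as an ordinary /dev/sdX regardless of whether it arrived via iSCSI, FC or SAS, which is why the same howtos apply.

```python
#!/usr/bin/env python3
"""Illustration: disks that arrived over different fabrics (iSCSI, FC,
SAS) all show up uniformly under /sys/block/sd*, so the layers above
don't care which transport delivered them."""
from pathlib import Path

def read(p: Path) -> str:
    try:
        return p.read_text().strip()
    except OSError:
        return "?"

for dev in sorted(Path("/sys/block").glob("sd*")):
    device = dev / "device"
    print(f"{dev.name}: vendor={read(device / 'vendor')} "
          f"model={read(device / 'model')} state={read(device / 'state')}")
```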
Hopefully that makes sense; it makes sense in my head. I’ve only recently seen vendors support virtualized storage with BlueField DPUs, and as you can imagine, that’s not cheap. I have some inherited storage like that for test/dev; we simply pass the LUNs to each individual host, pretend they are local disks and run Ceph across them, ignoring the proprietary RAID capabilities since those have bitten me in the past.
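For what it’s worth, on each host that approach boils down to roughly the following (a sketch only: the device paths are made up, and the exact pveceph subcommand may differ between versions, so check the docs and keep the dry-run flag on):

```python
#!/usr/bin/env python3
"""Sketch of the 'LUNs as local disks for Ceph' approach on a Proxmox
host: each passed-through LUN appears as /dev/sdX and becomes one OSD.
LUN list is hypothetical; verify the pveceph syntax for your version."""
import subprocess

DRY_RUN = True                          # flip only after double-checking devices
LUNS = ["/dev/sdb", "/dev/sdc"]         # hypothetical LUNs from the array

for dev in LUNS:
    cmd = ["pveceph", "osd", "create", dev]
    print("would run:" if DRY_RUN else "running:", " ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)
```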