Hi Achim,
I assume you have made some progress since your post 2 weeks ago. Here is some general advice:
ZFS is recommended with ECC RAM and, as a rule of thumb, wants about 1GB of RAM per 1TB of storage. You have a Xeon, so ECC is a possibility, but not a given.
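Just to put rough numbers on that rule of thumb (the 1GB-per-TB figure is a guideline, not a hard requirement, and the pool size below is only an example):

# Back-of-envelope ZFS RAM sizing, assuming the common "1 GB per 1 TB" rule of thumb.
# Numbers are illustrative, not measurements; adjust pool_tb to your drives.
base_ram_gb = 2            # headroom for the OS and ZFS itself
pool_tb = 8                # example pool size
suggested_ram_gb = base_ram_gb + pool_tb * 1
print(f"Suggested RAM for a {pool_tb} TB pool: roughly {suggested_ram_gb} GB")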
VMs on Proxmox work best on high-performance storage; a single spinning disk may not be suitable for more than a couple of VMs, depending on the workload. I assume the SSDs will be used for this.
You have indicated you have NVMe SSDs in a mirrored RAID. Do they have power-loss protection? Consumer-grade SSDs generally don't; enterprise-grade SSDs usually do. Drives with power-loss protection are better suited to RAID because they ensure the cache is written out before power is lost; otherwise you can end up with data corruption.
BTRFS and ZFS both provide bit-rot protection and RAID.
SnapRAID is not real-time protection; it may be suitable for media that doesn't change often. It's not generally the focus of the Proxmox forum.
Proxmox has no GUI support for BTRFS
Proxmox has no GUI support for SnapRAID
Proxmox has no GUI support for File sharing
So it depends on how much you want to manage via the command line.
With 8TB HDDs, depending on the brand and model, read performance could be between 80MB/s and 180MB/s and write performance between 40MB/s and 180MB/s (e.g. a Seagate Archive drive vs an enterprise NAS drive). As they are spinning disks, fragmentation will also decrease their performance over time. For streaming media, I'm assuming videos, the throughput you need depends on the encoded bit rate of the media and the number of users; at minimum double that to account for fragmentation, fast-forwarding and jumping around in the video. Example: 2K video at 10Mb/s x 4 users = 40Mb/s, doubled = 80Mb/s, which is about 10MB/s if you are only streaming videos (see the rough calculation below). If you are also adding content at the same time and running SnapRAID syncs, there could be performance issues (video pauses when the cache runs out) if the drives are too slow.
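A minimal sketch of that bandwidth math; the bit rate, user count and headroom factor are assumptions, plug in your own:

# Rough streaming-throughput estimate; example numbers only.
bitrate_mbps = 10          # encoded bit rate of one 2K stream, in megabits/s
users = 4                  # concurrent streams
headroom = 2               # factor for fragmentation, seeking, fast-forward
needed_mbps = bitrate_mbps * users * headroom
needed_mBps = needed_mbps / 8              # megabits -> megabytes per second
print(f"{needed_mbps} Mb/s is about {needed_mBps:.0f} MB/s of sustained reads")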
Basically, beyond 2 or 4 TB per drive, using RAID other than mirror for data protection (redundancy) is not recommended. The story is that drives tend to fail after a similar amount of use, so if you build your RAID set from a new batch of drives they may fail within days or months of each other. A RAID5/6 rebuild takes long enough that there is a higher chance of another drive failing during the rebuild. I think this has more to do with the performance of the RAID5/6 parity calculations on an operational array than with drive failures alone: a mirror or mirror-stripe RAID will perform as fast as the disks can, but RAID5/6 is limited by the processing speed of the controller while it is also serving normal I/O.
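To put a rough number on that rebuild window (the sustained rebuild speed here is an assumption; under normal VM or streaming load it can be much lower):

# Rough single-drive rebuild time; assumed sustained speed, not a benchmark.
drive_tb = 8
rebuild_MBps = 100                                  # assumed sustained rebuild throughput
seconds = drive_tb * 1_000_000 / rebuild_MBps       # 1 TB ~ 1,000,000 MB
print(f"Rebuilding one {drive_tb} TB drive at {rebuild_MBps} MB/s takes ~{seconds / 3600:.0f} hours")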
I have not used ZFS, so I don't know its limitations, but from what I have used of BTRFS in mirror-stripe mode, it is flexible enough to add and remove drives from an operational array. I have not tested how it handles failed drives, but assuming you have spare capacity you can rebalance before adding a new drive (e.g. with a 6-drive array, if you drop one drive you can reconfigure it as a 5-drive array before adding the 6th drive back). This may be desirable if you won't have replacement hardware available for some time; a rough sketch of the commands is below. BTRFS is still a work in progress, and from what I have read this is mostly to do with RAID5/6 configurations and possible data loss when power is lost.
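Here is a hedged sketch of that shrink-then-grow workflow; the device paths and mount point are made-up placeholders, it is just the generic btrfs CLI wrapped in Python, and you should run it as root and read the btrfs docs before relying on it:

# Hypothetical example: shrink a 6-drive btrfs array to 5 drives, later add a disk back.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Remove one drive; btrfs migrates its data onto the remaining five.
run(["btrfs", "device", "remove", "/dev/sdf", "/mnt/pool"])

# Later, add the replacement and rebalance so data is spread over all six again.
run(["btrfs", "device", "add", "/dev/sdg", "/mnt/pool"])
run(["btrfs", "balance", "start", "/mnt/pool"])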
If you want to use BTRFS, what you can do with Proxmox is pass the physical drives through to a guest VM, and the guest VM then handles the drives directly. However, most people recommend passing through the drive controller instead, to get full SMART monitoring. Then you can use a NAS guest VM like OMV or Rockstor. Rockstor has BTRFS integrated in the GUI; OMV supports BTRFS, but it is not integrated in the GUI.
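For the per-disk passthrough option, the usual route is Proxmox's qm set with a /dev/disk/by-id path. A small sketch, where the VM ID, SCSI slot and disk ID are placeholders you would replace with your own:

# Hypothetical sketch: attach a physical disk to an existing Proxmox VM by its stable ID.
import subprocess

vmid = "100"                                      # assumed VM ID
disk = "/dev/disk/by-id/ata-EXAMPLE_SERIAL"       # placeholder; use your real by-id path
subprocess.run(["qm", "set", vmid, "-scsi1", disk], check=True)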
On my side, I'm trying Proxmox with a Rockstor guest (5 drives passed through, BTRFS) and a bare-metal 32-bit OMV box with 2x SATA drives in a mirror plus 2x USB drives that I plan to mirror internally with BTRFS.
Anyone feel free to correct me if I've mixed up anything regarding Storage, Proxmox, etc.