There is a reason why this is not in the GUI: it's not recommended by the developers, see
https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements
They recommend local storage, preferably enterprise SSDs, or (as a compromise) a ZFS pool with HDD mirrors for the bulk data and an enterprise SSD mirror as a special device for small files and metadata.
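Just to illustrate what that layout could look like, here is a rough sketch (pool name, device paths and the small-block threshold are only placeholders, use your own disks, ideally via /dev/disk/by-id):

```
# bulk data on mirrored HDDs, metadata (and optionally small blocks)
# on a mirrored SSD "special" vdev
zpool create backup-pool \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# optionally also send small blocks (not just metadata) to the special vdev
zfs set special_small_blocks=4K backup-pool
```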
The reason is that PBS splits its data into a lot of small files ("chunks") to do its deduplication magic: data is saved only once, so if another backup is done and a chunk is already present, the backup doesn't take up more space but just references the already saved data. This benefit comes with a price though: for most tasks every single chunk needs to be touched or read (e.g. for garbage collection, verify etc.), which doesn't perform well on a HDD and even worse on a network share like NFS or CIFS.
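You can see this for yourself on any existing datastore: the chunks sit as individual files below the hidden .chunks directory, and jobs like verify have to walk all of them. Something harmless like this (the datastore path is just an example) gives you a feel for the file count a network share would have to handle:

```
# count the chunk files of a datastore (path is an example)
find /mnt/datastore/backup/.chunks -type f | wc -l

# total size for comparison: many smallish files means many per-file
# metadata round trips, which is exactly what NFS/CIFS are slow at
du -sh /mnt/datastore/backup/.chunks
```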
Some guy tried to recreate the PBS access pattern of a lot of small files with a benchmark script. Although the Proxmox developers had quite a few good points about why some of his assumptions aren't actually true, they agreed with his main result: network shares don't work well with PBS, see his results here:
Since his tests were done on localhost (so a network share on the same machine), the bad results can't be explained by network issues; NFS and CIFS are simply not well suited for PBS.
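If you want to convince yourself on your own hardware, a crude version of such a test is easy to reproduce (this is just my sketch, not his script): write and re-read a few thousand small files once on local storage and once on the mounted share, and compare the wall-clock times.

```
# crude small-file test; run once per target directory and compare
TARGET=/mnt/nfs-test      # swap for a local path for the second run
mkdir -p "$TARGET/bench"

time for i in $(seq 1 2000); do
    dd if=/dev/urandom of="$TARGET/bench/$i" bs=64k count=1 status=none
done

time cat "$TARGET"/bench/* > /dev/null
```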
The reason videos about such subpar setups exist (my "favourite" even explains how to do this over the internet to some cloud storage folder like Hetzner's Storage Box) isn't that it's a good idea (it isn't), but that vloggers need clicks and ad revenue.
Now, since PBS is basically Debian, you can still use network shares via the CLI and shell, but instead of doing this I would check whether you can set up a VM with PBS on your QNAP or TrueNAS. As far as I know both systems support running VMs, so if your hardware has enough power this is the way to go.
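If you absolutely want to try the network share route anyway, it works from the shell like on any other Debian (again just a sketch; the IP, export path and datastore name are examples):

```
# on the PBS host (plain Debian underneath)
apt install nfs-common
mkdir -p /mnt/nas-backup
mount -t nfs 192.168.1.50:/export/pbs /mnt/nas-backup

# then register the mount point as a datastore
proxmox-backup-manager datastore create nas-store /mnt/nas-backup
```

Just don't expect garbage collection or verify to be fast, for the reasons above.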
I remember a guy here in the German forum who first tried CIFS or NFS and then switched to a VM on his NAS. Although his datastore was on a HDD mirror (so not a recommended setup!), the results were quite impressive:
Well, if they are on HDDs, that will be the main reason. With ZFS you can speed this up with a so-called special device (since the metadata is then stored there, and garbage collection mostly works with the metadata), but you have to make sure you have at least two SSDs in a mirror, because if the special device fails, all data on the HDDs is gone as well.
For that I have, on the Synology, the BTRFS metadata cache described above on two 1 TB NVMe SSDs (as RAID-1) in the device. That is basically the "special device" in this case. When using...
The verify via CIFS took 5 h 30 min and the garbage collection job around 8-9 minutes. As a VM on the NAS, verify took 3 h 40 min and garbage collection around one to two minutes.
In my book these numbers speak for themselves, so I wouldn't bother with network shares; it's really not worth the trouble.