My homelab friend, this sort of behavior is pretty common with any system that transfers large files over the network.
There will be multiple levels of cache along the way, some of which you control, and others you may never become aware of.
You are largely confronting network speed and disk speed. In both of those areas you will encounter cache that makes it look like quite a bit more has happened than has really taken place.
Just for a point of reference, I recently made some network changes on a Veeam/VMware system and was gloating over the incredible speeds of my first tests ... until I started testing with VMs over about 50 GB, and then I saw the same old crawl I was used to.
I don't mean to discount the things folks have told you here. They are dead-on.
If you want your backups to perform well, you will need to reconsider your NAS mount.
There's more at play here than just the NAS being slow. PBS performs really poorly on an NFS/SMB mount.
The subject of PBS and a NAS comes up a lot. Do some reading.
Um ... this is something I wrote about PBS and NAS. Maybe start here.
Ok, this is going to be not-so-obvious, but give it a chance.
- The more common method is to mount the NFS share in the PBS filesystem and then set up a PBS datastore there. If you do this, you'll have to make some filesystem permission changes, and it will not perform well. (A rough sketch of this follows the list.)
- The less common method, which I actually employ, is to use the NFS share as a Proxmox datastore and host a qcow2 virtual disk there. Attach that qcow2 disk to the PBS VM and build your PBS datastore there. This option performs MUCH BETTER than the above method. It still sucks. (Also sketched below.)
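If you go the first route anyway, this is roughly what it looks like on the PBS host. It's a minimal sketch: the server address, export path, mount point, and datastore name are placeholders for your own values.

```bash
# On the PBS host. 192.168.1.50:/export/pbs, /mnt/nas-pbs, and
# "nas-store" are example values only.
apt install -y nfs-common                 # NFS client tools

mkdir -p /mnt/nas-pbs
echo '192.168.1.50:/export/pbs /mnt/nas-pbs nfs defaults,_netdev 0 0' >> /etc/fstab
mount /mnt/nas-pbs

# PBS services run as the "backup" user, so it must own the datastore path.
# This is where the permission pain starts: if the NAS squashes root or
# remaps UIDs, you'll have to sort that out on the NAS side too.
chown backup:backup /mnt/nas-pbs

# Register the directory as a PBS datastore.
proxmox-backup-manager datastore create nas-store /mnt/nas-pbs
```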
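And the second route, run from the Proxmox VE host rather than inside PBS. Same caveat: the storage ID, IP, export path, VM ID, and disk size below are placeholders for illustration.

```bash
# On the Proxmox VE host. "nas-nfs", 192.168.1.50, /export/pbs,
# VM ID 110, and the 500 GB size are example values only.

# Add the NFS export as a PVE storage that can hold VM disk images.
pvesm add nfs nas-nfs --server 192.168.1.50 --export /export/pbs --content images

# Allocate a 500 GB qcow2 disk on that storage and attach it to the PBS VM.
qm set 110 --scsi1 nas-nfs:500,format=qcow2

# Then inside the PBS VM: format and mount the new disk (it'll show up as
# something like /dev/sdb) and build the datastore on it, e.g.
#   mkfs.ext4 /dev/sdb && mount /dev/sdb /mnt/nas-disk
#   proxmox-backup-manager datastore create nas-store /mnt/nas-disk
```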