A prune only deletes a few KBs/MBs of index files. The GBs/TBs of data live in the chunks, and those are only removed by a garbage collection (GC). PBS also requires 24 hours + 5 minutes or more between the prune and the GC before the unreferenced chunks become eligible for deletion. And on a 100% full datastore you probably won't be able to run a GC at all, since PBS can't create its lock files and so on.
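To make the timing concrete, a prune followed by a GC could look like this on the PBS host. This is only a sketch: the datastore name `store1`, the backup group `vm/100`, and the retention setting are placeholders, not values from this thread.

```shell
# Prune old snapshots of one backup group (here: keep the newest 3).
# "store1" and "vm/100" are placeholder names.
proxmox-backup-client prune vm/100 --keep-last 3 \
    --repository root@pam@localhost:store1

# ...wait at least 24 hours and 5 minutes, so the pruned chunks
# pass the GC cutoff, then start the garbage collection:
proxmox-backup-manager garbage-collection start store1

# Check progress and how much space was freed:
proxmox-backup-manager garbage-collection status store1
```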
First, I would disable all backup jobs so no new backups fill the datastore again.
Then there are several options:
A.) If you care about your backups, buy more disks and expand your storage, which is straightforward if you use something like ZFS as the backing storage.
B.) Alternatively, move the whole datastore folder to bigger storage (for example a NAS), prune + GC it there, and move it back.
C.) If you don't care about some of your backups, delete some index files to free a few KBs/MBs (those backups will be lost) and hope that this is enough to run a GC. If PBS and the datastore share the same filesystem, first try deleting unimportant files such as logs.
D.) If you don't need any of those backups, delete the datastore and create a new, empty one.
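For option B, the move could be sketched like this, assuming the datastore lives at `/mnt/datastore/store1` and the NAS is mounted at `/mnt/nas` (both paths are placeholders):

```shell
# Stop the PBS services so nothing writes to the datastore during the move:
systemctl stop proxmox-backup proxmox-backup-proxy

# Copy the datastore, preserving permissions, ownership and hard links
# (-H matters if any chunks are hard-linked):
rsync -aH --info=progress2 /mnt/datastore/store1/ /mnt/nas/store1/

# Point the datastore at the new path by editing the "path" entry in
# /etc/proxmox-backup/datastore.cfg, start the services again, then
# prune + GC there and rsync the now smaller datastore back.
systemctl start proxmox-backup proxmox-backup-proxy
```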
And for the future, you should set up proper monitoring (for example Zabbix with the PBS template) and use quotas, so this can't happen again.
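As a simple safety net alongside real monitoring, a ZFS quota plus a cron-able usage check could look like this; the dataset name, mount point, threshold, and sizes are all placeholders:

```shell
# Reserve headroom on the backing dataset so the pool never hits 100%
# ("tank/pbs" and the size are placeholders; ZFS only):
zfs set quota=900G tank/pbs

# Minimal usage check, e.g. for a daily cron job:
THRESHOLD=85                  # warn above this usage percentage
MOUNT=/mnt/datastore          # placeholder datastore mount point
USED=$(df --output=pcent "$MOUNT" | tail -n 1 | tr -dc '0-9')
if [ "$USED" -ge "$THRESHOLD" ]; then
    # "mail" assumes a working local MTA; swap in your alerting of choice
    echo "WARNING: $MOUNT is ${USED}% full" | mail -s "PBS datastore almost full" root
fi
```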