[SOLVED] PBS with datastore on NFS

Fug1
Mar 27, 2019
I know there have been a lot of posts about this, but I'm still struggling.

I have a new PBS instance up and running on 3.2-2. I have a FreeNAS/TrueNAS server with an NFS share set up for my backups.

I have mounted the NFS share using /etc/fstab under /mnt/backups in PBS. I have verified that I can write to this folder on PBS with both root and backup users. To test writing as the backup user, I added a shell for the backup user under /etc/passwd and switched to backup user with 'su - backup'.
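For reference, the mount and the write test described above look roughly like this. The NAS address and export path are placeholders, and `_netdev` is just one common option for network mounts:

```shell
# /etc/fstab entry (hypothetical NAS address and export; adjust to yours)
# 192.168.1.50:/mnt/tank/backups  /mnt/backups  nfs  defaults,_netdev  0  0
mount /mnt/backups

# write test as the backup user; -s supplies a shell without editing /etc/passwd
su - backup -s /bin/bash -c 'touch /mnt/backups/.writetest && rm /mnt/backups/.writetest && echo OK'
```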

Still, when I attempt to add a datastore under PBS on the NFS mounted directory, I get "EPERM: Operation not permitted". I guess this must be a permissions issue, but I can write with the backup user. Are there any logs that can provide more detail about this error?
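One way to get more detail than the GUI's bare "EPERM" is to retry from the CLI and watch the daemon log (the datastore name here is hypothetical). A frequent culprit with NFS is the export squashing root, since datastore creation needs to set ownership on the directory:

```shell
# creating the datastore from the CLI often prints a fuller error
proxmox-backup-manager datastore create backups /mnt/backups

# follow the API daemon's log while retrying from the GUI
journalctl -fu proxmox-backup-proxy
```

If the log points at a permission error despite the write test passing, checking the maproot/mapall settings on the FreeNAS export is worth a look.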

Thank you.
 
Hi,

I'm glad to hear that you solved the issue yourself!

I will go ahead and set your thread as [SOLVED] in order to help other people who have the same issue.
 
The OP is going away with a bad solution. How is that "Solved"?

@Fug1 Have you seen the testing to indicate that this is the very worst way to mount storage for PBS?

This thread is a bit contentious.
https://forum.proxmox.com/threads/datastore-performance-tester-for-pbs.148694/

The guy that wrote the script is trying to make the point that what you are doing is wrong.
There's some messy discussion there, but I think they proved their point.
Use anything other than a direct NFS mount.
If you have to use that storage and that server, connect them with iSCSI instead.

PS ... Maybe it's time for @Der Harry to have another say about this.
 
"Solved" means the question was answered to OP's satisfaction. The NFS-based backup was not working, and now it is. (Assuming backups are completing and verifying without errors.) :)

Efficiency and efficacy of a particular storage solution is a separate topic from actually getting that storage solution working, and it's up to each of us to decide what level of performance is good enough for our use cases. Using a slower/less efficient type of network connection is not "wrong" unless it causes errors or compromises the backup somehow. It's just not the ideal option.

There are certainly NASes out there that support NFS but not iSCSI, for example. Having more options well-documented means more people have a better chance of using the hardware they actually have.

For me, I'm looking for a secondary local backup, syncing from a local backup on the PBS server that lives on a SATA SSD mirror (ZFS). NFS would probably be good enough for that, as I'd want it to run overnight around 4 AM. I'm fortunate enough to have iSCSI, so I'll probably use that. But it's nice to know I have options.
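In case it helps anyone, a nightly pull from the primary into a secondary datastore can be set up from the CLI. All the names below are hypothetical, and the flags are from `proxmox-backup-manager sync-job`, so verify them against your version:

```shell
# hypothetical remote/datastore names; runs the pull every night at 04:00
proxmox-backup-manager sync-job create nightly-pull \
    --store secondary \
    --remote primary-pbs \
    --remote-store main \
    --schedule '04:00'
```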
 
Yes, it's working perfectly for my application. I haven't observed any performance issues.
Awesome. :) Thanks for letting us know.

I'm probably going to try setting up an iSCSI link to a four-way HDD mirror pool soon. It'll be interesting to see how that does. Raw write speed and network speed are fine, but I'm sure the I/O on spinning rust will negate most of that performance. It'll be interesting to see how much the ZFS ARC mitigates that. My PBS server has 64 GB of RAM, and I've honestly never seen it pushed to use more than half of that. The TrueNAS server is getting an upgrade to 64 GB soon, as well.
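A quick way to see how much the ARC is actually using: OpenZFS on Linux exposes its stats through procfs. The 32 GiB cap below is just an example value:

```shell
# current ARC size and ceiling, in bytes
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# example: cap the ARC at 32 GiB to leave headroom for other services
echo $((32 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```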
 
Is it because it's just bad practice to use NFS with PBS? Is SMB better? I am trying to set up NFS as well, but with a Synology NAS.
 
Hi @lapas . Excellent question.

PBS does a lot of communication with the storage, far more so than most backup back ends. It virtualizes all the data, dedupes it. There's only one single copy of any chunk written. A lot of the disk work is actually scanning the metadata to decide if it wants to even write a particular chunk. Because of this constant back and forth data demand, the typical 'slow storage for backups' configuration does poorly for PBS.
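That write-once, lookup-first behavior can be sketched as a toy content-addressed store. This is illustrative only, not PBS's actual on-disk format, though PBS's `.chunks/` directory is organized along similar lines:

```shell
# toy content-addressed chunk store: each unique chunk is written exactly once
store=$(mktemp -d)

write_chunk() {
    # name the chunk by its SHA-256 digest
    hash=$(printf '%s' "$1" | sha256sum | cut -d' ' -f1)
    prefix=$(printf '%s' "$hash" | cut -c1-4)
    path="$store/$prefix/$hash"
    if [ -e "$path" ]; then
        echo "skip $hash"        # metadata lookup says the chunk already exists
    else
        mkdir -p "$store/$prefix"
        printf '%s' "$1" > "$path"
        echo "write $hash"
    fi
}

write_chunk "block A"    # first occurrence: written
write_chunk "block B"    # first occurrence: written
write_chunk "block A"    # deduplicated: only the lookup happens, no write
```

Every one of those existence checks is a round trip when the store sits on an NFS mount, which is where the latency multiplies.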

So, that much I understand. And then we get into some stuff that I only sorta understand.

SMB and NFS are about equally inefficient if you mount one of these directories to your PBS and then designate it as a datastore.
Actually, 'inefficient' barely describes it. They are worse than other options by 1 or 2 orders of magnitude. Like, actually bad.

And then, paradoxically, a QEMU virtual disk stored on an NFS datastore in PVE, with the PBS datastore built inside that disk, does much, much better.
I don't entirely understand why.
Obviously, it's being used much more effectively in the case of PVE to support a virtual disk, than it is the Debian OS mounting a directory on the NAS.
But I do believe it, because it's been tested, and I've run the tests. See that link I posted for more info.

I use NFS storage for my (secondary) PBS systems.
It's a 6 TB QEMU qcow2 virtual disk on the NAS.
Works alright. Not great. But better than the direct mount in the OS.
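For anyone wanting to replicate that layout: assuming the NFS export is already added to PVE as a storage called `nas-nfs` (a hypothetical name), allocating a qcow2 disk there for the PBS VM looks roughly like this. VM ID, sizes, and device names are examples only:

```shell
# on the PVE host: allocate a 6 TiB (6144 GiB) disk for VM 100 on the NFS storage
qm set 100 --scsi1 nas-nfs:6144,format=qcow2

# inside the PBS VM: format the new disk and build the datastore on it
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/datastore
proxmox-backup-manager datastore create secondary /mnt/datastore
```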

I believe DerHarry's team indicated that iSCSI is the way to go if that's an option. (Not my thing, haven't tried it.)
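For completeness, connecting a TrueNAS iSCSI target from the PBS side uses open-iscsi; the portal address and IQN below are placeholders:

```shell
apt install open-iscsi

# discover targets on the NAS, then log in to the one exported for PBS
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:pbs -p 192.168.1.50 --login

# log in automatically at boot
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:pbs -p 192.168.1.50 \
    --op update -n node.startup -v automatic
```

The logged-in LUN then shows up as a normal block device, which you format and use for the datastore like any local disk.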
 