Adding NFS Share as Datastore in Proxmox Backup Server

plastilin
Renowned Member · Oct 9, 2012 · Ukraine
I'm currently using OviOS Linux as my main NFS storage, and PBS runs as a virtual machine. I'd like to continue using network storage as my backup target. How would I add my NFS share to the Proxmox Backup Server as storage?
 
hi,

you can simply mount your NFS share (with the mount command, or by adding it to /etc/fstab). Afterwards, create a datastore on the mountpoint (via the GUI, or with proxmox-backup-manager datastore create mystore /path/to/my/nfs)
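A minimal sketch of those two steps, assuming a hypothetical NFS server at 192.168.1.50 exporting /export/pbs (addresses, paths, and the datastore name are placeholders; adjust to your setup):

```shell
# Mount the NFS export on the PBS host (one-off; add an fstab entry for persistence):
mkdir -p /mnt/pbs-nfs
mount -t nfs 192.168.1.50:/export/pbs /mnt/pbs-nfs

# Create a datastore on the mountpoint:
proxmox-backup-manager datastore create mystore /mnt/pbs-nfs
```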
 
hi,

you can simply mount your NFS share (with the mount command, or by adding it to /etc/fstab). Afterwards, create a datastore on the mountpoint (via the GUI, or with proxmox-backup-manager datastore create mystore /path/to/my/nfs)
that worked fine for me.
But why does Proxmox Backup Server back up free (unused) space from disks?
 
that worked fine for me.
But why does Proxmox Backup Server back up free (unused) space from disks?
Why do you think that PBS backs up «free» space?
Use the «mail» function in the settings to send a report to yourself; there you'll find useful info about the backup process itself, like how long it ran and which disks/dirty bitmaps got backed up. The status page of the VM always shows the full size of the VM itself, but only changed blocks are transferred to your NFS share.
 
For me, it is not working, because the directory and share are owned by user root, while the datastore creation is done by the user "backup".

Code:
root@pbs:/mnt# proxmox-backup-manager datastore create store_core /mnt/store_core
TASK ERROR: EPERM: Operation not permitted
Error: task failed (status EPERM: Operation not permitted)

And NFS does not allow mounting with a specific uid and gid.

So I don't know how to proceed.
 
The backup user needs write access. Did you try to chown the folder on your NAS to the UID of the backup user, and/or map the NFS share to the backup user's UID?
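For illustration, a sketch of that approach. The UID 34 is what the backup user typically has on Debian-based PBS installs, but verify it yourself; /export/pbs and the subnet are placeholders:

```shell
# On the PBS host: find the backup user's UID/GID (typically 34:34 on Debian):
id backup

# On the NAS: hand the export over to that UID/GID:
chown -R 34:34 /export/pbs

# Alternatively, many Linux NFS servers can map all client accesses to a fixed
# identity via /etc/exports, e.g.:
#   /export/pbs  192.168.1.0/24(rw,sync,all_squash,anonuid=34,anongid=34)
```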
 
hi,

you can simply mount your NFS share (with the mount command, or by adding it to /etc/fstab). Afterwards, create a datastore on the mountpoint (via the GUI, or with proxmox-backup-manager datastore create mystore /path/to/my/nfs)
Just wanted to let you know this worked.
I used SMB in my setup, but this worked otherwise. Thank you :)
 
Good to know you got it working. I wrote a step-by-step guide on setting up NFS for PBS backups (I used an NFSv4 share on my Synology NAS) that I could share. I'm sure it can be applied to any NFS share...
 
People work so hard on making this bad idea function.
And it is a bad idea, from the start. Known quantity. It will work poorly at very best.

For homelab or technology demos, learning stuff, poking around, I get it.
I like to string together silly components that shouldn't be used. It's fun to watch the systems struggle. Groovy.

But if you are doing this for work, reconsider your path.
Do not store your .chunks folder directly on NFS or SMB mounts in the PBS filesystem.
 
Please provide the officially supported way to store the .chunks directory on shared storage then. Companies have shared storage, including dedicated storage for backup. Where is the documentation on the supported/unsupported methods, since NFS/SMB direct access isn't supported (according to whom?)
 
I've seen Proxmox staff provide directions to mount and use NFS as a PBS datastore. I think it's supported? Maybe? That's unclear to me.

But is it a good idea? No. It is not. I've seen some extensive testing and done plenty of my own.
I can state that NFS and SMB mounts in the PBS filesystem are an exponentially worse way to store your .chunks than anything else.

iSCSI tested pretty well, and may be an option in many cases.

If you actually want to know about this, I can dig up @Der Harry 's thread.
 
The documentation's recommended hardware lists enterprise SSDs, so no, using NFS or CIFS is not really supported:
https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements
Since the system is basically Debian, one can of course use the usual ways to mount shared network storage or iSCSI.
Whether it's a good idea is a different thing though; I agree with tcabernoch on this: PBS stores its data in a lot of small files which need to be read when doing the verify and garbage collection jobs. Network storage will always be slower than local storage. If you must, iSCSI will probably work better than NFS/CIFS though.
It might be an alternative to add shared storage via FC, but I can't recall any discussions on it.
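If you go the iSCSI route, the rough shape on a Debian-based PBS looks like the following sketch (target portal and IQN are placeholders; check the actual block device with lsblk before formatting, since mkfs destroys data):

```shell
apt install open-iscsi

# Discover and log in to the target (placeholder portal/IQN):
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2000-01.com.example:pbs-lun -p 192.168.1.50 --login

# The LUN appears as a local block device; format and mount it like a local disk:
mkfs.ext4 /dev/sdX        # replace sdX with the actual device from lsblk
mount /dev/sdX /mnt/pbs-iscsi
proxmox-backup-manager datastore create iscsistore /mnt/pbs-iscsi
```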
 
I've added an NFS share as a sync target (so no direct backup). It's running well so far. Garbage collection is slow, of course.

Just mount the share via fstab and go with it...
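A possible fstab line for such a setup (server address, export path, and mountpoint are placeholders; pick the NFS version your NAS actually supports):

```shell
# /etc/fstab
192.168.1.50:/export/pbs-sync  /mnt/pbs-sync  nfs  vers=4.2,hard,noatime,_netdev  0  0
```

Then mount -a picks it up, and the _netdev option makes sure the mount waits for the network at boot.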
 
I've seen Proxmox staff provide directions to mount and use NFS as a PBS datastore. I think it's supported? Maybe? That's unclear to me.

But is it a good idea? No. It is not. I've seen some extensive testing and done plenty of my own.
I can state that NFS and SMB mounts in the PBS filesystem are an exponentially worse way to store your .chunks than anything else.

iSCSI tested pretty well, and may be an option in many cases.

If you actually want to know about this, I can dig up @Der Harry 's thread.

Saying that it's a bad idea without giving actual evidence on WHY it's a bad idea is not really helpful.

I'm using this setup as well (on my homelab) and it works reliably so far, except that it is terribly slow. Getting around 30 MiB/s writes on drives that are capable of ~220 MiB/s (although limited by 1 Gbit/s networking)
 
Saying that it's a bad idea without giving actual evidence on WHY it's a bad idea is not really helpful.

I'm using this setup as well (on my homelab) and it works reliably so far, except that it is terribly slow. Getting around 30 MiB/s writes on drives that are capable of ~220 MiB/s (although limited by 1 Gbit/s networking)
PBS receives data over HTTPS in very small chunks. PBS uses synchronous operations for such writes, meaning every file or chunk must be written before the next one gets processed. Due to the latency and overhead of NFS, it's terribly slow, as you mention.
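This write pattern can be approximated crudely with fio, if you want to compare your NFS mount against a local disk under the same synchronous small-block load (/mnt/pbs-nfs is a placeholder mountpoint; the parameters are illustrative, not PBS's exact chunk sizes):

```shell
fio --name=pbs-sim --directory=/mnt/pbs-nfs \
    --rw=randwrite --bs=4k --size=256M --fsync=1 \
    --numjobs=1 --ioengine=psync --group_reporting
```

Running the same command against a local directory shows how much of the slowdown is per-operation latency rather than raw bandwidth.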
 
PBS receives data over HTTPS in very small chunks. PBS uses synchronous operations for such writes, meaning every file or chunk must be written before the next one gets processed. Due to the latency and overhead of NFS, it's terribly slow, as you mention.
Which was also backed up by a simulation of PBS modus operandi for a lot of different datastore options:
https://forum.proxmox.com/threads/datastore-performance-tester-for-pbs.148694/

Basically it turned out that even if the network shares are on the same host as the test (so you know that any performance differences are due to the kind of datastore used (NFS, CIFS/SMB, iSCSI, ext4, ZFS etc.) and not the latency/performance penalty of the network), performance with network shares will be way worse than with local storage connected as a native filesystem (ext4/ZFS etc.).

One should note that (in another thread) there was quite a lively debate between the author of this benchmark and the PBS developers on the validity of some of his assumptions:
https://forum.proxmox.com/threads/developer-question-about-chunk-folder.148167


Nonetheless they agreed with his finding that using network storage isn't good for performance and should be avoided. That's exactly the reason why PBS recommends using enterprise SSDs for PBS datastores. One compromise might be to set up a ZFS mirror with relatively slow HDDs and combine it with a special device mirror on fast SSDs. This will still be slower than a pure SSD datastore but will speed up stuff like garbage collection a lot.
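A sketch of such a pool layout (device names and the pool name are placeholders, and zpool create destroys existing data, so double-check them with lsblk first):

```shell
# HDD mirror for bulk chunk data, plus a mirrored "special" vdev on SSDs
# that holds the pool's metadata:
zpool create pbsdata \
    mirror /dev/sda /dev/sdb \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Create the datastore on the pool's mountpoint:
proxmox-backup-manager datastore create pbsdata /pbsdata
```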
Saying that it's a bad idea without giving actual evidence on WHY it's a bad idea is not really helpful.

The reasons above are not really new, but have been discussed in this forum more than once.
For these reasons I don't think it's helpful to tell people how to use NFS or CIFS as a PBS datastore. Instead, people should be told not to do that, and why. This would also have the benefit that future users who search the forum for threads on this topic will learn about this without having to ask the same question again.
 
Another reason why NFS isn't a good option for a datastore is the fact that NFS is insecure by design.
Like most protocols from the early days of networking/the Internet that are still in use (e.g. FTP), it was developed under assumptions about the trustworthiness of a network's users and administrators which no longer hold. For example, any user with root permissions on their machine (so anybody who controls their own notebook) can access any file on any NFS share, since NFS security is based on the user ID and the client IP; there is no other authentication (except Kerberos with NFSv4, which isn't active by default and is quite a hassle to set up). So anybody with root rights on their machine can just create a local user with the right UID and will have access to the files on the NFS share. I saw a great talk on this last year by a professional penetration tester:
https://talks.mrmcd.net/2024/talk/7RR38D/

It's in German, but his slides are thankfully available in English, so the main points should still be clear even if you don't understand German.
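The UID-trust problem can be illustrated with a short sketch (server address, export, and UID are placeholders; only try this against systems you own):

```shell
# Suppose files on the export belong to UID 1000. On any client where you
# have root, fabricate a matching local user and the server will trust it:
useradd -u 1000 impostor
mount -t nfs 192.168.1.50:/export/data /mnt/data
su impostor -c 'ls -l /mnt/data'
```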

For these reasons I wouldn't consider NFS as permanent storage for any important backups, but would always have a second, immutable (so safe from ransomware attacks or hackers) copy in an offsite location.
A local dedicated PBS with its storage on a local disk (not connected over the network), on the other hand, can be configured so that the PVE nodes can only write new backups but not edit or remove them. So this (together with a remote PBS in an offsite location, like a vserver or something like tuxis.nl or inett's PBS cloud services) would be my preferred way to back up with PBS