[SOLVED] [Unsupported] NFS doesn't work on PBS

Nov 16, 2022
EDIT:
A lot of rant...

PBS is marketed as an enterprise product, but it's limited to local SSDs (the only "supported" setup), and setting up anything else (like a remote NFS share) isn't as straightforward as you'd expect from an "enterprise" product. I come from Veeam and have used NFS over 10 GbE to back up my ~30 Linux VMs for the past 7 years without any issues. Now, a week after deploying PBS, I'm still struggling to get it to do what I want (read below).

Not making it possible to mount NFS or CIFS locally on the PBS server and then use that as a datastore is like buying a car without wheels. Yes, the permissions are correct, it mounts, I can back up - but it took me forever to get there, and the mount still has issues with just dying. PVE works without a hitch, but PBS - nope!

Basically, there is a lot of room for improvement.
 
You can't really complain that a feature that isn't officially supported doesn't work. You shouldn't even use HDDs, as they lack the required IOPS performance. It's designed for local SSD storage.

And here NFS has been working totally fine since PBS 1.0 up to now with PBS 2.3.
 
Thanks @Dunuin, I've read your posts. I've tried them and succeeded as well, but once mounted I get constant timeouts and have to reboot the server.

I know, the supported setup is plain SSD. But that's making it easy on yourselves, and to me that's not a production-ready system. It works, but "only with this specific setup". It's like VMware (Veeam) saying you must run this on SSDs for it to work... Naah, don't think so.

Maybe I'm spoiled by Veeam working with the exact same setup for 7 years, and now this. I'm not a Linux novice, and it seems I've tried everything. Even the forbidden `777` like you suggested.

Sorry, ranting here, but it seems impossible. I don't care if it's slow, I need incremental backups for my 21 TB data.
 
OK, so I FINALLY got it working again by adding a new user and group named backup to the QNAP, with UID and GID 34.

Code:
[~] # addgroup -g 34 backup
[~] # adduser -u 34 -G backup backup
Changing password for backup
New password:
Retype password:
Password for backup changed by admin
[~] # cat /etc/group
administrators:x:0:admin
everyone:x:100:admin,enoch
guest:x:65534:guest
backup:x:34:backup
[~] # cat /etc/passwd
.....
backup:x:34:34:Linux User,,,:/share/homes/backup:/bin/sh


Also removed the specific users in the QNAP GUI and added no_root_squash instead.
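To confirm the mapping actually lines up on both ends, a small (hypothetical) helper like this can be pointed at the mount point - 34:34 being the UID/GID of the Debian backup user that the PBS services write chunks as:

```shell
# Hypothetical helper: check that a path is owned by UID/GID 34:34,
# i.e. the Debian "backup" user that PBS writes its chunks as.
check_backup_owner() {
    uid=$(stat -c '%u' "$1") || return 2
    gid=$(stat -c '%g' "$1") || return 2
    if [ "$uid" = "34" ] && [ "$gid" = "34" ]; then
        echo "OK: $1 is owned by 34:34 (backup:backup)"
    else
        echo "MISMATCH: $1 is owned by $uid:$gid, expected 34:34"
    fi
}

# Example (run on PBS after mounting the share):
#   check_backup_owner /mnt/qnap
```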

On PBS I added this in my /etc/fstab:


Code:
# NFS
192.168.1.115:/PBS /mnt/qnap nfs defaults,mountproto=tcp,nfsvers=3 0 0


192.168.1.21 = PBS
|
10 GBe
|
192.168.1.115 = QNAP
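Worth noting for the hangs: NFS mounts default to hard, meaning I/O blocks forever while the server is unreachable. A hedged variant of the fstab line above with soft-mount options (the timeout values are examples, not recommendations - see nfs(5) for the trade-offs; soft mounts turn hangs into I/O errors, which can fail a backup instead of freezing the whole box):

```
# NFS - soft/timeo/retrans make I/O error out instead of blocking forever;
# _netdev makes systemd wait for the network before mounting
192.168.1.115:/PBS /mnt/qnap nfs defaults,mountproto=tcp,nfsvers=3,soft,timeo=150,retrans=3,_netdev 0 0
```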

One issue remains, and it's the same as last time... Adding the storage in Proxmox works, but then it can't connect. So I'm kind of back to square one again, because this was the issue I was trying to solve by reinstalling.

Any help is appreciated. Just added another disk in the QNAP to cope with the huge backups from just running it from Proxmox.
 
Now it's been a week of reinstalling, adding, removing, and trying different workarounds, with the same end result:

1. I can mount NFS fine in the CLI via `/etc/fstab`.
2a. Somehow I managed to get NFS to mount as a datastore, but the connection seems to get lost whenever systemd needs a reload: the mount hangs until a `systemctl daemon-reload` is done. This causes constant timeouts, and the only way to solve it is to reboot the whole PBS server. When it happens, even SSH hangs.
2b. This led me to reinstall the whole thing, and this time - adding an NFS datastore, no way!
3. There are no logs for debugging. I'm left in the dark.
4. A lot of bad words.

How can you call it production ready without the possibility to add remote storage? I mean, it's Debian-based - just show the local mounts in the backend and allow us to use them as datastores! OK, it's not fast, but it's way better than running everything on the same server if you're on a budget. It's also commonly used in different enterprises.
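Regarding symptom 2a: systemd generates a .mount unit for every /etc/fstab entry, named after the mount point, and that unit can be reloaded and restarted on its own, which might avoid the full reboot. A sketch, assuming the /mnt/qnap mount point from this thread (the helper function is illustrative; `systemd-escape` does the name conversion properly):

```shell
# For simple paths the mount unit name is just the mount point with the
# leading '/' dropped and the remaining '/' turned into '-'
# (use `systemd-escape -p --suffix=mount <path>` for the general case):
unit_name() { printf '%s.mount' "$(printf '%s' "${1#/}" | tr '/' '-')"; }

unit_name /mnt/qnap    # prints: mnt-qnap.mount

# Then, instead of rebooting the whole PBS server:
#   systemctl daemon-reload
#   systemctl restart "$(unit_name /mnt/qnap)"
```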

A: The axle on my VW Polo broke while driving off-road. How can you call it production ready?! :mad:
B: The VW Polo is not and never was designed for driving off-road. Why did you not use an appropriate off-road vehicle? :oops:
A: I do not care what my VW Polo is designed for and what it isn't! I am on a budget, so it has to go through it! :mad:
B: ... :rolleyes:
 
And are the backups, including the hidden chunks, still owned by the backup user? Or did you just change the NFS share owner?
 
Spot on. Sorry for ranting, but yesterday I was writing in anger.
 

It worked when I rebooted, but then died again after backing up the largest VM I have (7 TB).

Once the permissions were set, I added the datastore with `proxmox-backup-manager datastore create qnap /mnt/qnap` and the chunks were created by the PBS server on the NFS share. I then checked the permissions on the NFS share and they were indeed backup:backup. But I'm not sure if that's the NFS side or the PBS side, since when I tested I changed the permissions on the NFS side (QNAP) to the same UID and GID (34:34) as the user which created the chunks on a normal disk (backup:backup, 34:34).

When I come home I will investigate why systemd has to be reloaded - because I came to the conclusion that every time I lose the connection and check `systemctl status nfs-qnap.mount` (or something like that), it's loaded and active, but I also see a warning that the systemd units were changed and have to be reloaded. After reloading, the PBS datastore comes back online in Proxmox. Though sometimes it doesn't, and a reboot of the PBS server is required.

Any suggestion which logs to check? I've had a quick look at dmesg and syslog, but there's not much to go on in those. It's also a bit hard because I don't really know what to look for.
 
Maybe YOU lack the knowledge, sorry to say.
It's a simple permission problem.
 
Fighting doesn't really help.
PBS can work perfectly fine with NFS, as it does here, even if that is neither recommended nor a supported feature. So it would be better to spend time finding the problem (whether it is rights, the NFS server, the network, hardware, PBS/Debian, or whatever) instead of finger-pointing.
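A few places worth checking on the PBS side (service and path names assume a stock PBS install; adjust if yours differs):

```shell
# API/datastore activity lands in the journals of the PBS services:
#   journalctl -u proxmox-backup -u proxmox-backup-proxy -f
# Finished/failed task logs are kept under /var/log/proxmox-backup/tasks/
# Kernel-side NFS client trouble shows up in dmesg, typically as lines like
# "nfs: server 192.168.1.115 not responding, still trying":
dmesg 2>/dev/null | grep -i nfs || echo "no NFS messages in the kernel log"
```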
 
Agreed, and that's what I'm trying to do here. Running a backup of that large VM now, with `tail -f` on syslog.

Also did a deeper investigation of dmesg and syslog, but found nothing useful. I wish I had more logs to check, but I'm afraid there's not much to go on in this minimized system (running as a VM installed from the 2.3 ISO).
 
OK, so something useful - this happens after about 100 GB copied for the large VM:

Code:
ERROR: Backup of VM 130 failed - backup write data failed: command error: write_data upload error: pipelined request failed: connection reset

What does that translate to?
 
Hey, so I have been trying the same thing for the last two days. I created a user and group in TrueNAS named "pbs" with ID 1004 for both. I did the same in PBS as well. I still can't create a datastore. I'm not an expert, but using the mkdir command I was able to create a folder within the NFS share.

What else is needed?
 
@ZooKeeper PBS uses the user backup for the NFS share (as you can read above) and you need to create a user in TrueNAS with the same UID and GID for it to work.

So create a user in TrueNAS with UID 34 and GID 34, and name both the group and the user backup. Then mount the share and run `proxmox-backup-manager datastore create truenasnfs /path/to/nfs/share` from PBS. Now it should start creating chunks.
 
My server is shut down for maintenance right now (a TrueNAS Core server serving the NFS share to a PBS VM on the same server), so I can't look at how I did it, but I remember I had to use NFSv3 because NFSv4 wasn't working, and I think I changed the rights of the dataset and NFS share to 777 so everyone is allowed to read and write, no matter what user/group is accessing it. NFS doesn't have any user authentication anyway, so I limited access to the NFS share by whitelisting the IP of my PBS. And I think I had to change the maproot or mapall fields.
Also keep in mind the ACLs of your dataset.
 
This is the error I get, followed by a list of commands I can use. I don't see any datastore command.

Code:
Error: no such command 'datastore'
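That error looks like the command was run as proxmox-backup-client: datastore management belongs to the server-side proxmox-backup-manager CLI, not to proxmox-backup-client, so it's worth double-checking which binary was invoked (the path here is a placeholder):

```
# 'datastore' is a subcommand of proxmox-backup-manager (run on the PBS host),
# not of proxmox-backup-client, which handles backup/restore operations:
proxmox-backup-manager datastore create truenasnfs /path/to/nfs/share
```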
 
