[TUTORIAL] PBS on TrueNAS - Have your cake and eat it too

PwrBank · Nov 12, 2024
What?
Do you want to use PBS to back up your servers, but have TrueNAS handle the storage?
I've been using this setup for the last few weeks and it has worked perfectly.

With the latest update of TrueNAS Community Edition, 25.04, you can run Linux Containers (LXC) natively on the system.

Why?
This lets the ZFS back end of TrueNAS handle snapshots, replication, etc. for a robust storage layer. TrueNAS is a little more user-friendly when it comes to managing the actual devices involved in the storage: you can easily expand the storage, divide it up for other uses, and run other services if you want to.

How?

⚠️ You will need TrueNAS 25.04 (or newer) in order to get LXC support.

Setting up the Debian container
We will be using a Debian LXC as the base for PBS.

Navigate to the Instance section within TrueNAS
1740574050612.png
And create a new instance with a Debian Bookworm image
1740574097439.png
(By default it will allow all RAM and CPU to be shared with the container; set this according to your needs)

Add a disk and create a new dataset to use with the PBS container.
For the destination, I chose /mnt/pbs
1740574230217.png
If you'd like to use the same IP address as the TrueNAS system, add a proxy setting. I assigned the default ports used by PBS with HTTPS.
1740574300406.png

Press Create


(THIS PART COULD USE FEEDBACK FROM THE COMMUNITY, I'M NOT GOOD AT LINUX PERMISSIONS YET, THIS JUST WORKED FOR ME AND MAY NOT BE THE MOST SECURE SETTINGS)
Navigate to the dataset that was created and change the permissions to the preset ACL named "POSIX_OPEN" and set the owner and group to "backup"
1740574362213.png

Navigate back to the Instance page and connect to the container with the Shell button
1740574422878.png

Installing PBS
Once you are in the shell of the container, set the root user password with
passwd

Create the directory that will be used for the PBS datastore (this is the directory mounted during the LXC creation, plus a folder inside it)
mkdir /mnt/pbs/data
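If you prefer to set the ownership from the shell instead of (or in addition to) the TrueNAS ACL preset above, something like the following should work from inside the container. This is a sketch, assuming the container's idmap passes ownership through to the host dataset; the backup user and group exist on Debian by default and are what PBS runs as.

Bash:
# Give the PBS service user ownership of the datastore directory
chown -R backup:backup /mnt/pbs/data
# Owner and group get full access; everyone else gets none
chmod -R 750 /mnt/pbs/data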

Run the following commands
Bash:
# Update available repositories
apt update

# Install wget and nano
apt install wget nano

# Add the Proxmox repository key to the install
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

Edit the apt repository list and add the Proxmox repositories
nano /etc/apt/sources.list

Add the following to the sources list
Code:
# PBS no-subscription
deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription

# security updates
deb http://security.debian.org/debian-security bookworm-security main contrib

Update the apt repositories and install PBS
Bash:
apt update
apt install proxmox-backup-server

Once the installation process is finished, you should be able to connect to the TrueNAS IP, but on port 8007 via HTTPS.
Log in with the root account, using the Linux PAM realm.
1740574898168.png

Create a new datastore using the path that was created at the beginning of the PBS install
1740574928091.png
Press Add
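If you prefer the shell, the datastore can also be created from inside the container with proxmox-backup-manager; the datastore name "truenas-data" below is just an example.

Bash:
# Register a datastore named "truenas-data" backed by the mounted dataset
proxmox-backup-manager datastore create truenas-data /mnt/pbs/data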

The new datastore should be created and show the full size of the dataset that was created for the data in TrueNAS
1740574954193.png


This setup has been used heavily in my test environment and is working exactly as expected.
1740575006350.png

TrueNAS also takes a scheduled snapshot of the dataset every 6 hours and keeps it for 3 days, hopefully to prevent accidental deletion or to help in an event where PBS is subject to ransomware.
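From the TrueNAS shell you can confirm those snapshots are being taken; the pool/dataset name below is a placeholder for whatever you created for the container disk.

Bash:
# List snapshots of the PBS dataset (substitute your own pool/dataset)
zfs list -t snapshot -r tank/pbs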



❓ If you have any feedback, please leave it below and I will update this guide as best I can
 
Nice job.

Been doing this with regular VMs and TrueNAS for a couple years.
TBH, it ain't great, but you do get a huge advantage from having the VM right on top of the storage.
TrueNAS is a cr@ppy VM host.

...
I don't container much. The below feedback is about VMs, not containers.

I encountered some significant considerations running a VM with ZFS inside a ZFS dataset.
Dunno if they matter for you.

When building a PBS VM that will use ZFS and run on TrueNAS, you should set disk sector size.
If you get this wrong, when you run zpool status, it will tell you about your mistakes.
Of course, you figure that out well after you've deployed the whole thing and started using it.

There are three levels that have a sector size: the zvol, the virtual machine device, and the ZFS inside the VM.
  • zvol - (leave default) it allocates a sector size based on the size of the disk you create. for a 30gb disk, it chooses 4k. for a 2tb disk, it chooses 16k. it will warn you if you change it, and you will waste a lot of disk space and lose a bit of speed if you do so.
    • this virtual disk container is created in the truenas area below the storage pool.
  • virtual machine device - (change Disk Sector Size to 4k) this is where you specify the properties of the virtual disk.
    • TrueNAS > Virtual Machine > expand details > Devices > properties for the disk > Sector Size
  • zfs - (proxmox defaults seem ok) this is the VM's internal ZFS disk format itself, which has properties like recordsize and volblocksize
    • zfs get all
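A quick way to check for the mismatches described above; the zvol name is illustrative.

Bash:
# Inside the PBS VM: zpool status prints a warning if the pool's
# ashift doesn't match the reported sector size of the virtual disk
zpool status
# On the TrueNAS host: inspect the zvol's block size
zfs get volblocksize tank/pbs-vm-disk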
 
As "crappy" as TrueNAS is at being a hypervisor, we've been running a few mission-critical VMs with heavy I/O (VMS software for 50+ 4K cameras) for almost a year now with zero hiccups or maintenance issues. I have some confidence that, with the new TrueNAS 25.x, the LXC container feature with ZFS dataset directory mapping will work quite well. I'm excited to give this a go. It allows a hunk of storage metal to have an extra layer of data protection and be far more multi-purpose.
 
I have since worked with the latest TrueNAS Scale. It's KVM. Seems flawless. I'm hopeful it's as solid as it seems.

My scorn for the older revs remains. They're a poor tool for hosting. There are interface bugs that I've seen on multiple installations: the GUI becomes unaware of the state of the VM; the virtual bridge can drop a NIC and insist that it's up, or even do awful unknown ARP things that require a reboot of the host to resolve. These are all deal-breakers.
 
It will be interesting to see what the integration of Incus into the next SCALE version will bring. Of course, Proxmox VE will still be the better virtualization platform (more features than Incus, more flexibility), but I can see a use case for homelabbers and small businesses who don't want to invest much time in server housekeeping. I'm also wondering whether iXsystems is shifting its strategy from enterprise customers to homelabbers/small businesses, since they already abandoned their Kubernetes integration in favor of Docker.
 
I'm testing this, but I have a problem with networking. I would like a separate network for the PBS LXC container. The TrueNAS host has only one NIC, so I created 2 VLANs in the networking section.
nas.jpg
On the container I set NIC: eth0 (MACVLAN) (br192). Only when I enabled the DHCP server for vlan192 on the router did the container get an IP, and it was outside the defined range. Is there any way to set a static IP for this container, so I could access it only on 192.168.0.50? I have already tried multiple combinations: adding a bridged adapter br192 to the LXC, removing the alias (IP) from br192, and a few more. Is what I want to achieve possible?
 
I'm testing this, but I have a problem with networking. I would like a separate network for the PBS LXC container. The TrueNAS host has only one NIC, so I created 2 VLANs in the networking section.

What I did was set up two bridges and assign the TrueNAS IP alias to both of them: one for management and one for data.

Then I assigned both bridges to the LXC. Then in the LXC, set the IP addresses statically.
 
How did you set the static IP in the LXC?

If the Debian install normally uses DHCP and the management LAN has DHCP, you should be able to find its IP in the shell and navigate to PBS normally, then set it from inside PBS. Otherwise you will need to set it in the network config file and restart the networking service:

/etc/network/interfaces
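As a sketch, a static configuration in /etc/network/interfaces might look like the following, using the 192.168.0.50 address mentioned above and assuming the container's interface is named eth0:

Code:
auto eth0
iface eth0 inet static
    address 192.168.0.50/24
    gateway 192.168.0.1

Then apply it with systemctl restart networking (or reboot the container).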
 
ok, nice. I've done it, inside PBS. I created a bridge to the eth interface, set the IP and gateway, and enabled VLAN aware. I also deleted the alias IP under the bridge, which is created in TrueNAS. So the final config:
1743077769256.png
and PBS:
1743077874691.png
Maybe the vmbr0 bridge is not needed, but I don't know where else to set that the eth0 interface is VLAN aware.
 
Hi, not sure if this is the correct place to ask but I was having some issues with the permissions.

This is my first time using TrueNAS, I am evaluating its use as the main OS for a backup server with PBS virtualised since I really like PBS for PVE backups, but it's a bit useless for anything else.

Bash:
root@pbsTEST:~# ls -lashF /mnt
total 13K
512 drwxr-xr-x  3 root   root     3 Mar 31 18:58 ./
12K drwxr-xr-x 17 root   root    21 Mar 31 05:30 ../
512 drwxrwx---  2 nobody nogroup  3 Mar 31 18:56 target/
root@pbsTEST:~#

I added this mount via the "Disk" menu in the "Instances" menu. It corresponds to /mnt/priamry_pool/primary (where priamry_pool is the "root" dataset - and yes the misspelling is correct, it was unintentional but it is called that now).

Is anyone able to advise on how to fix the permissions here? I can't even ls inside the mount due to it being owned by nobody. Thanks in advance.
 
Not to state the obvious, but make sure your TrueNAS' ZFS drives aren't set to be backed up by Proxmox.

Otherwise Proxmox will try to back up the backup it's creating, and your VM will die a confusing death :)

Screen Shot 2025-05-03 at 22.16.57.png
 
Hi, nice write-up. I have just started my backup evaluation journey (I already use a PBS on an aging Synology).

I am not sure what approach to take but had some questions:
  • I note that PBS can see all of the disks managed by TrueNAS; that seems like a bad thing... If I am mounting a dataset or zvol into the PBS LXC, it would seem we want to hide all the disks from PBS?
  • Did you evaluate this approach of mounting a dataset vs. creating a zvol?
  • An addition to your guide: good practice is to add the sources to a new file such as /etc/apt/sources.list.d/pbs.list rather than editing the main sources file; this ensures it never gets overwritten by the OS (e.g. during a dist-upgrade).
  • On your question about permissions, thanks for giving me a reason to go learn :-) All containers run as apps/apps (568:568 on my system), so the following works to secure the dataset to truenas_admin and, unfortunately, any Incus container that wants it. Incus does support different hostids and user/group maps (so one dataset could be given to root in one container), but TrueNAS doesn't expose this in the UI, and if you try to do it at the command line TrueNAS breaks it, because it resets the map every time the container starts (grrr). Still, this is much better than setting other to rwx, which gives unconstrained access to any other user or group on the system...

    1746423000288.png
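Following that suggestion, the repository entry from the guide can be dropped into its own file instead of the main list; a minimal sketch:

Bash:
# Keep the PBS repo in its own file so dist-upgrades don't clobber it
cat > /etc/apt/sources.list.d/pbs.list <<'EOF'
deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription
EOF
apt update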
 
OP, this is the worst possible way to set up PBS tbh; one hiccup on the TrueNAS VM and you lose your storage and your backup/restore.
Best way according to my testing:
Separate VMs for PBS and TrueNAS, with TrueNAS sharing an iSCSI zvol directly to PBS; the block device is then formatted as ext4 and mounted within PBS using fstab so it loads on startup.
Bam, near 1:1 performance. In my testing I found that the garbage collection and verify tasks in particular take significantly less time on iSCSI than on an NFS or SMB share.
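For reference, an fstab entry for such an iSCSI-backed ext4 volume might look like this; the UUID is a placeholder for your own device, and _netdev makes the mount wait for the network:

Code:
# /etc/fstab entry for the iSCSI-backed ext4 datastore volume
UUID=<your-volume-uuid>  /mnt/datastore  ext4  defaults,_netdev  0  2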
 

This is running TrueNAS on bare metal with a PBS container; there are no VMs involved?
 
Just a thought... since TrueNAS Scale is based on Debian, as is PBS... why not install PBS directly on the TrueNAS system (without LXC or VMs)?
 
why not install the PBS directly on the TrueNAS system (without LXC or VMs)?
iXsystems positions SCALE as an appliance.

Basically Proxmox is doing the same with PVE. But Proxmox is "open"; the underlying Debian is unrestricted and you can do "everything" with it. Including strange things like installing Docker, for example. (No, please do not do that ;-) )

iXsystems enforces their known state of the system. Every single bit you change on the OS level by circumventing their middle-ware is problematic as the chance that this will be "undone" by SCALE is really high.

From my point of view, it is just impossible to install complex software directly on SCALE. Note that installing PBS in parallel onto PVE is very well supported (though not recommended).
 
For your install commands once the shell is loaded on the CT: a one-liner could do all the steps you outlined, @PwrBank.
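Based on the steps in the original post, such a one-liner might look like this (same repo, key, and package as the guide; a sketch, not tested beyond what the guide itself describes):

Bash:
apt update && apt install -y wget && \
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg && \
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list && \
apt update && apt install -y proxmox-backup-server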
iXsystems enforces their known state of the system. Every single bit you change on the OS level by circumventing their middle-ware is problematic as the chance that this will be "undone" by SCALE is really high.
nothing in what the OP did does this

I think you are having a knee-jerk anti-TrueNAS reaction. Yes, iXsystems' opinionated lockdown can be infuriating, but it is utterly irrelevant to the thread at hand. There is no "installing complex software onto SCALE"; there is using Incus and LXC for the purpose for which they were designed, and the container doesn't even run privileged.

PBS running in an Incus LXC on TrueNAS 25.04 is quite intriguing. I just moved my PBS from a Synology DS1815+ using CIFS as a backend to TrueNAS using a ZFS dataset as a backend. The idea of using snapshots and exports should work reasonably well as a last local resort (the first will be a PBS replica I will set up on another NAS).

And to really mess with you, my TrueNAS is virtualized on Proxmox ;-)

You really just posted to piss on his parade, you know that, right? We don't care that you don't want this; good for you! Guess what, no one is going to make you do this. Phew, huh.

1747119686709.png
 
nothing in what the OP did does this
Well, I was just answering "install the PBS directly on the TrueNAS system (without LXC or VMs)", which implied for me "directly on the OS". Sorry if that impression was wrong.

you really just posted to piss on his parade
Ooops - not my intention.

TrueNAS is a fine product, and I've never stated otherwise. ((I used it in production (at home) for years, several years ago (pre-Corral disaster), and I have a test SCALE running...))