Proxmox Backup Server is the solution you are looking for, since it supports live boot (live restore) from a backup; you cannot live boot from ZFS snapshots.
Now you have two options: use Proxmox Backup Server natively on your backup storage, in case that storage is dedicated to your cluster (the professional setup).
For home labs, where you may want to share your backup storage with other things, TrueNAS can host a PBS VM, and you provide that VM with whatever amount of storage you need for your server backups.
When your node 1 dies, you can restore from backup with live boot almost instantly.
Note that the enterprise method is a bare-metal PBS server.
If you do anything I am saying here, you are on your own.
SETUP 1.0
This is the setup I have used for a couple of years at home, and it worked.
- I created a VM in Truenas SCALE
- I followed the guide for installing PBS into the virtual machine. You can use any Debian derivative; I chose Debian Bookworm because it's the same one you find in the PBS installation ISO. I strongly discourage you from using TrueNAS CORE for this, because its VMs are too slow in some setups. Instructions are here: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-debian
- Inside the virtual machine, I mounted a share from TrueNAS
- I created the datastore in PBS by specifying the mountpoint that I chose inside the PBS VM (in my case /mnt/backup, because I have a huge imagination)
Now the point is what kind of share you use. Having tested NFS and found it very slow, I went with SAMBA. This has been working, and I made sure to create a lot of snapshots on the TrueNAS dataset, so I was sure to have at least one good snapshot if need be. It is not ideal, but it is way faster than NFS because IT DOES NOT SYNC AS OFTEN as NFS. This is an NFS-on-ZFS issue that is well known to anybody who has compared NFS on ZFS with SAMBA, or iSCSI for that matter. iSCSI is probably better than SAMBA, but I wanted to be able to sync the dataset with Google Drive (encrypted), just for FUN and to see if it works, which I cannot do from TrueNAS if it's iSCSI.
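For reference, the SMB mount plus datastore creation boils down to something like this (a sketch; the hostname, share name and credentials file are made-up examples, adapt them to your setup):

```shell
# /etc/fstab inside the PBS VM: mount the SMB share from TrueNAS at boot
# (//truenas.local/backup and /etc/pbs-smb.cred are hypothetical names;
#  uid=backup/gid=backup because the PBS services run as the "backup" user)
//truenas.local/backup  /mnt/backup  cifs  credentials=/etc/pbs-smb.cred,uid=backup,gid=backup,_netdev  0  0

# mount it and create the PBS datastore on top of the mountpoint
mount /mnt/backup
proxmox-backup-manager datastore create backup /mnt/backup
```

After this, the datastore shows up in the PBS web UI like any local one.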
NFS write performance, in fact, was something like 20-40 MB/s on >1 Gbit Ethernet without a proper ZFS SLOG device, which should be a small but very, VERY fast NVMe SSD, mirrored. Mirroring is required if you don't want to lose the entire pool in case the SLOG device fails!
Unless, of course, you have an NVMe ZFS pool, in which case it is going to be faster (I don't know by how much), and I would still prefer a proper SLOG device. Now, if you are not very expert in ZFS, don't mess with SLOG devices, of course.
If you are afraid of SAMBA (actually not a bad stance) or iSCSI (same), go for NFS.
SETUP 1.1
This is a setup I have used with a customer: he wanted to go this way even though I told him not to, because it could be moderately risky and very slow, as I said before. Actually, the fact that it is so slow makes it riskier.
The same as SETUP 1.0, but the VM was on a Proxmox Virtual Environment machine and the dataset was shared via NFS.
Now, of course, the problem here is what happens when the mount point goes down because the customer's TrueNAS CORE box goes down (which cannot happen in SETUP 1.0, because they are the same machine). So I created a script that checks whether the mount point works and, if not, continuously tries to remount it.
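The watchdog logic amounts to something like this (a minimal sketch, not the exact script I used; the mountpoint path and retry interval are examples, and it assumes the share has an fstab entry so a bare `mount <path>` works):

```python
#!/usr/bin/env python3
"""Remount watchdog: if the NFS mountpoint is gone, keep trying to remount it.
Sketch only -- adapt the path and interval, and run it under systemd or cron."""
import os
import subprocess
import time

MOUNTPOINT = "/mnt/backup"   # where the TrueNAS share should be mounted (example)
RETRY_SECONDS = 30           # how long to wait between checks

def is_mounted(path: str) -> bool:
    """True if `path` is currently a mountpoint (works for NFS and CIFS alike)."""
    return os.path.ismount(path)

def try_remount(path: str) -> bool:
    """Attempt a remount via the fstab entry; return True on success."""
    result = subprocess.run(["mount", path], capture_output=True)
    return result.returncode == 0

def watch() -> None:
    """Loop forever, remounting whenever the share drops."""
    while True:
        if not is_mounted(MOUNTPOINT):
            try_remount(MOUNTPOINT)
        time.sleep(RETRY_SECONDS)

# call watch() from a systemd service; not invoked here on purpose
```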
This is safe unless the share goes down mid-write; if it is down before a backup starts, the backups on the PVE hosts will just fail...
SETUP 2
TrueNAS might support jailmaker natively in the future for creating sandboxes. In the meantime, the services and tools needed for creating sandboxes have shipped since SCALE 24.04.
Again: nothing I am saying here is supported by either trueNAS or Proxmox!
So what did I do, given that I have loads of snapshots and backups?
- I created a debian sandbox with jailmaker in Truenas SCALE.
- I followed the guide for installing PBS into the sandbox.
- I followed the jailmaker docs to bind-mount the original dataset into the sandbox.
- The mount point was still /mnt/backup, so I just copied the contents of /etc/proxmox-backup from the old VM over to the new sandbox and set up the same IP, again according to the jailmaker docs.
- SOLVED THE BIG PROBLEM that follows
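For reference, the bind mount and the config copy looked roughly like this (a sketch; the pool name, jail name and paths are made up, and you should check the jailmaker docs for the exact config option spelling):

```shell
# In the jail's jailmaker config: bind the TrueNAS dataset into the sandbox
# (systemd-nspawn syntax: --bind='host_path:jail_path'; tank and pbs are example names)
systemd_nspawn_user_args=--bind='/mnt/tank/backup:/mnt/backup'

# From the TrueNAS host: carry over the PBS configuration from the old VM
# into the jail's root filesystem (source path is wherever you staged the old config)
cp -a /tmp/old-vm-proxmox-backup/. /mnt/tank/jailmaker/jails/pbs/rootfs/etc/proxmox-backup/
```

With the same mountpoint, the same IP and the same /etc/proxmox-backup, the sandboxed PBS picks up the existing datastore as if nothing happened.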
THE BIG PROBLEM IF YOU ARE MOVING FROM SAMBA TO LOCAL/NFS/iSCSI OR VICE VERSA
Now the "big" problem is this: if you are coming from a dataset that was mounted via SMB, typically you will mount from TrueNAS with en-US.UTF-8. If you mount via NFS, it uses the destination charset, which in TrueNAS SCALE is unfortunately C.UTF-8 (I don't know for the love of god why they changed it; it was en-US.UTF-8 in TrueNAS CORE!).
Now, changing the locale in TrueNAS is out of the question; I don't want to break anything.
In en-US.UTF-8 (or other European languages; I don't know shit about Japanese, Chinese and other charsets) the character ":" (code point 58) in SAMBA becomes code point 61474 (U+F022) on NFS, which is "invisible".
Because the snapshots in the PBS datastore are named something like
ct/107/2024-10-27T11:00:19Z for containers or
vm/107/2024-10-27T11:00:19Z for virtual machines, the colons get mangled and the backups literally "disappear".
So I had to create a script to rename them accordingly. Now everything is working with SETUP 2, and a lot faster than before, pretty much like bare metal.
In fact, the other directories do not have any special characters in their names. I am mainly talking about the hidden .chunks directory that contains the data, where the names of the files and dirs are just made of hexadecimal digits.
Now, of course, I need to check at every major TrueNAS SCALE upgrade whether the artists at iXsystems decided to change the default locale again, but it's pretty easy: if I see the backups disappear, I know where to look.