PROXMOX and TRUENAS on bare metal

Dec 12, 2024
Hello,
I have a Proxmox 8.3 HA cluster with 3 nodes (N1, N2 and N3), using ZFS on local disks.
I want to use my TrueNAS box, named TN, on bare metal, with ZFS snapshots coming from the other nodes.
Is this possible, and how do I configure it?
Let's suppose I have defined a VM on node N1 using a local ZFS disk.
Can I keep a snapshot of it on TrueNAS TN? Can that snapshot be used in case of a crash of node N1?
I mean: if node N1 crashes, will Proxmox HA start the VM from the snapshot on TrueNAS?
Should TrueNAS TN belong to the HA cluster?

How do I configure Proxmox 8.3 and TrueNAS TN to use ZFS over iSCSI?

Thanks
 
I think what you are looking for is Proxmox Backup Server, as it can do 'live boot'. You could still use TrueNAS as the storage backend if you are splitting the storage for other purposes.
 
Proxmox Backup Server is the solution you are looking for (live boot), you cannot live boot from ZFS snapshots.

Now you have two options: use Proxmox Backup Server natively on your backup storage if that storage is dedicated to your cluster (the professional setup).
For home labs, where you may want to share your backup storage with other things, TrueNAS can host a PBS VM; you provide that VM with whatever amount of storage you need for your server backups.

When your node N1 dies, you can restore from backup with live boot almost instantly.
 

Note: the enterprise method is a bare-metal PBS server.
If you do anything I am saying here, you are on your own.

SETUP 1.0
This is the setup I have used for a couple of years at home, and it worked.

  1. I created a VM in TrueNAS SCALE.
  2. I followed the guide for installing PBS into the virtual machine. You can use any Debian derivative; I chose Debian Bookworm because it's the same one you find in the PBS installation ISO. I strongly discourage you from using TrueNAS CORE for this because VMs are too slow in some setups. Instructions are here: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-debian
  3. Inside the virtual machine, I mounted a share from TrueNAS.
  4. I created the datastore in PBS by specifying the mountpoint that I chose inside the PBS VM (in my case /mnt/backup, because I have a huge imagination).
Now the point is what kind of share you use. Having found NFS to be very slow, I went with SMB. This has been working, and I made sure to create a lot of snapshots on the TrueNAS dataset, so I was sure to have at least one good snapshot if need be. It is not ideal, but it is way faster than NFS because IT DOES NOT SYNC AS OFTEN as NFS does. This is a well-known NFS-on-ZFS issue for anybody who has compared NFS on ZFS with SMB or iSCSI, for that matter. iSCSI is probably faster than SMB, but I want to be able to sync the dataset with Google Drive (encrypted), just for FUN and to see if it works, which I cannot do from TrueNAS if it's iSCSI.
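For reference, the share mount in step 3 could be wired up with an /etc/fstab entry along these lines. The server name, share name, and credentials file below are illustrative placeholders, not taken from the setup described above:

```
# /etc/fstab -- hypothetical CIFS mount for the PBS datastore.
# "backup" is the user PBS runs as on Debian; adjust uid/gid as needed.
//truenas.lan/pbs-share  /mnt/backup  cifs  credentials=/root/.smb-pbs,uid=backup,gid=backup,iocharset=utf8,vers=3.0,_netdev  0  0
```

Something like `proxmox-backup-manager datastore create backup /mnt/backup` would then register the mountpoint as a datastore (step 4).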

NFS write performance, in fact, was around 20-40 MB/s on >1 Gbit Ethernet without a proper ZFS SLOG device, which should be a small but very, VERY fast NVMe SSD, mirrored. Mirroring is required if you don't want to lose the entire pool in case the SLOG device fails!

Unless, of course, you have an NVMe ZFS pool, in which case it is going to be faster (I don't know by how much), and I would still prefer a proper SLOG device. Now, if you are not very experienced with ZFS, don't try messing with SLOG devices, of course.

If you are afraid of SAMBA (actually not a bad stance) or iSCSI (same), go for NFS.

SETUP 1.1
This is a setup I used with a customer: he wanted to go this way despite the fact that I told him not to, because it could be moderately risky and very slow, as I said before. Actually, the fact that it is so slow makes it riskier.

The same as SETUP 1.0, but the VM was on a Proxmox Virtual Environment machine and the dataset was shared via NFS.

Now, of course, the problem here is what happens when the mount point goes down because this customer's TrueNAS CORE box goes down (which cannot happen in SETUP 1.0, because they are the same machine). So I created a script that checks whether the mount point works and, if not, continuously tries to remount it.
This is safe, because if the share goes down the backups on the PVE hosts will just fail... unless the share goes down mid-write.
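A minimal sketch of such a watchdog, not the author's actual script; the mountpoint path is a placeholder and an /etc/fstab entry for it is assumed to exist:

```python
import os
import subprocess
import time

MOUNTPOINT = "/mnt/backup"  # hypothetical NFS mountpoint inside the PBS VM


def needs_remount(path: str) -> bool:
    """True when the directory is not currently an active mountpoint."""
    return not os.path.ismount(path)


def watch(path: str, interval: float = 30.0) -> None:
    """Poll forever; retry the mount whenever it has dropped."""
    while True:
        if needs_remount(path):
            # Relies on an fstab entry for `path`; check=False so the loop
            # keeps retrying until the NAS is reachable again.
            subprocess.run(["mount", path], check=False)
        time.sleep(interval)
```

Run `watch(MOUNTPOINT)` from a systemd service or similar. As noted above, this does not protect against the share dropping mid-write.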

SETUP 2
TrueNAS might support jailmaker in the future in order to create sandboxes. In the meantime, the services and tools needed for creating sandboxes have shipped since SCALE 24.04:
Again: nothing I am saying here is supported by either TrueNAS or Proxmox!

So what did I do, given that I have loads of snapshots and backups?
  1. I created a Debian sandbox with jailmaker in TrueNAS SCALE.
  2. I followed the guide for installing PBS into the sandbox.
  3. I followed the jailmaker docs' instructions to bind-mount the original dataset into the sandbox.
  4. The mount point was still /mnt/backup, so I just copied the content of /etc/proxmox-backup from the old VM over to the new sandbox and set up the same IP, also according to the jailmaker docs.
  5. I SOLVED THE BIG PROBLEM that follows.

THE BIG PROBLEM IF YOU ARE MOVING FROM SMB TO LOCAL/NFS/iSCSI OR VICE VERSA

Now the "big" problem is this: if you are coming from a dataset that was mounted via SMB, typically you will mount from TrueNAS with en-US.UTF-8. If you mount via NFS, it uses the destination charset, which in TrueNAS SCALE unfortunately is C.UTF-8 (I don't know for the love of god why they changed it; it was en-US.UTF-8 in TrueNAS CORE!)
Now, changing the locale in TrueNAS is out of the question; I don't want to break anything.

In en-US.UTF-8 (or other European locales; I don't know shit about Japanese, Chinese and other charsets) the character ':' (code point 58) under SMB becomes code point 61474 under NFS, which is "invisible".

Because the metadata in the PBS datastore are named something like ct/107/2024-10-27T11:00:19Z for containers or vm/107/2024-10-27T11:00:19Z for virtual machines, the backups literally "disappear".

So, I had to create a script to rename them accordingly. Now everything is working with SETUP 2, and a lot faster than before, pretty much like bare metal.
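A sketch of that rename, assuming the mapping described above (SMB's "Services for Mac" scheme stores ':' as the private-use code point U+F022, decimal 61474). This is a reconstruction, not the author's exact script, and the datastore path is a placeholder:

```python
import os

SFM_COLON = "\uf022"  # SMB's SFM stand-in for ':' (decimal 61474)


def restore_colons(name: str) -> str:
    """Map the invisible SFM character back to a literal ':'."""
    return name.replace(SFM_COLON, ":")


def rename_tree(root: str) -> None:
    """Rename affected entries bottom-up so children move before parents."""
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for entry in dirnames + filenames:
            fixed = restore_colons(entry)
            if fixed != entry:
                os.rename(os.path.join(dirpath, entry),
                          os.path.join(dirpath, fixed))
```

Stop the PBS services before running something like `rename_tree("/mnt/backup")` against a real datastore, and keep a ZFS snapshot to roll back to.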

In fact, the other directories do not have any special characters. I am mainly talking about the hidden .chunks directory that contains the data, where the names of the files and dirs are made only of hexadecimal digits.

Now, of course, I need to check at every major TrueNAS upgrade whether the artists at iXsystems decide to change the default locale again, but it's pretty easy: if I see the backups disappear, I know where to look.
 
TrueNAS might support jailmaker in the future in order to create sandboxes.
already in 25.04 :)

but the better question is: do you need TrueNAS OUTSIDE of the backup server? Proxmox and TrueNAS are converging in functionality to the point that it may be preferable to run one OR the other, not virtualized inside one another. If the intention is to continue running Proxmox, it is POSSIBLE to do what you suggest (zfs send snapshots to a TrueNAS-hosted ZFS volume), but there is no easy way to recover those snapshots, limiting their usefulness in practice. As others pointed out, PBS is the better option, not only because it is integrated into the Proxmox workflow but also because it has pretty effective deduplication that does not depend on the target filesystem.
 
Yes. In fact, I never said I am virtualizing TrueNAS.
I did, and it worked perfectly for a while. The only reason I am not virtualizing TrueNAS now, though, is that consumer processors do not have enough PCIe lanes for my liking. So I have 2 PVE machines and 2 bare-metal TrueNAS machines (one primary and one backup). Please note this is a home lab setup; it would be a horrible waste of money if I needed a FIFTH machine just for PBS, I am not Warren Buffett. Besides, TrueNAS is a much better way to manage other services and data, like SMB, iSCSI and NFS shares. Not to mention how resilient and easy to restore TrueNAS is. PBS and PVE? You need the command line both for backing up and for restoring a host.

So, I STRONGLY disagree with the idea that Proxmox and TrueNAS can substitute for one another, except in very simplistic cases: PVE and PBS are not storage appliances, while TrueNAS is not a virtualization appliance, and they change too much (plugins yes, plugins no, bhyve here, KVM there, Kubernetes... oh nope, now it's Docker, etc.). Yes, there is significant overlap in sheer functionality, but there are major differences in ease of management and refinement.
If you ask me, the only things that are worth virtualizing on TrueNAS are:
  • native Docker containers, which PVE does not have; but hey, you can still spin up a VM/CT in PVE and run them from there
  • LXC containers, now: still not perfectly overlapping. Even though I have not seen any interface yet, it is highly doubtful you will ever get the ease of management that you get with PVE (backup, migration, etc.); of course, if you have a few LXC containers or virtual machines that almost never change, you won't spin up a PVE instance just for THAT: TrueNAS is fine
  • PBS is an exception to the LXC-container case, where having PBS as a TrueNAS appliance is way superior, especially if you have only one instance of PBS and want to leverage TrueNAS management capabilities like snapshotting, replication and cloud backup; this is even better if you already need TrueNAS for other stuff that Proxmox does not have. And PBS does not need much in terms of backup: just rsyncing /etc/proxmox-backup.

Now, I might still go back to virtualizing my primary NAS if I decide to buy something like a ThreadRipper in the future... but I will never virtualize the TrueNAS instance where I keep my secondary backups.

By the way, if you don't have some alternative storage, where are you backing up the PVE hosts themselves?
 
The only reason I am not virtualizing TrueNAS, though, is that consumer processors do not have enough PCIe lanes for my liking.
Now, I might still go back to virtualizing my primary NAS if I decide to buy something like a ThreadRipper in the future

While I don't really understand what is meant by this, I concur wholeheartedly that you shouldn't virtualize TrueNAS inside a PVE environment. It's a nested dependency that doesn't really serve any real benefit. You posit there are benefits to running TrueNAS, and I don't disagree, but if you're running storage on the same physical hardware as PVE, you should let PVE handle storage-aggregation duties so you're not creating a circular dependency.

Please note this is a home lab setup. It would be a horrible waste of money if I needed a FIFTH machine just for PBS, I am not Warren Buffett. Besides, TrueNAS is a much better way to manage other services and data, like SMB shares, iSCSI shares, NFS shares etc. Not to mention how resilient and easy to restore TrueNAS is. PBS and PVE? You need the command line both for backing up and restoring a host.
The question posed was whether sending snapshots to TrueNAS was possible; not sure what your budget has to do with it... no one here demands or is entitled to your justifications for deploying what you're deploying ;)

PBS is an exception to the lxc container case where having PBS as a Truenas appliance is way superior, especially if you have only one instance of PBS, you want to leverage TrueNAS management capabilities like snapshotting, replication, cloud backup; this is even better if you already need TrueNAS for other stuff that Proxmox does not have. And PBS does not need much in terms of backup, just rsyncing /etc/proxmox-backup.
This is actually possible, and may be a workable solution.
 
While I dont really understand what is meant by this
I merely meant that TrueNAS on consumer hardware (up to basic Xeons) does not have enough PCIe lanes to build a storage solution.
The question posed was whether sending snapshots to TrueNAS was possible; not sure what your budget has to do with it... no one here demands or is entitled to your justifications for deploying what you're deploying ;)
My answer was meant to show, with a working example (mine), that yes, you can send snapshots of a ZFS dataset containing PBS backups to TrueNAS, and it works well. It is also significantly faster than PBS replication, and you don't need to maintain another instance of PBS, provided you take the snapshots at a time when you are sure nobody is writing to the PBS dataset. Actually, you could use PVE hooks to create a snapshot before each backup, but that's rather advanced.
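To make the idea concrete, here is a hedged sketch of the snapshot-and-send step. The dataset, snapshot and host names are invented, and `-F` on the receive side (force-rollback of the destination) is one possible policy choice, not necessarily this poster's:

```python
import shlex
import subprocess


def build_pipeline(dataset: str, snap: str, host: str, dest: str) -> str:
    """Shell pipeline for a full (non-incremental) zfs send over ssh."""
    send = f"zfs send {shlex.quote(dataset + '@' + snap)}"
    recv = f"ssh {shlex.quote(host)} zfs receive -F {shlex.quote(dest)}"
    return f"{send} | {recv}"


def replicate(dataset: str, snap: str, host: str, dest: str) -> None:
    # Snapshot first (ideally while nothing is writing to the PBS dataset),
    # then stream the snapshot to the TrueNAS box.
    subprocess.run(["zfs", "snapshot", f"{dataset}@{snap}"], check=True)
    subprocess.run(build_pipeline(dataset, snap, host, dest),
                   shell=True, check=True)
```

An incremental follow-up would add `-i previous_snap` to the send side; a vzdump hook script could call `replicate(...)` after backups finish, as mentioned above.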

The rest I was responding to you, because it sounded like you were implying Proxmox and TrueNAS are interchangeable. Nope.
 
Well you guys seem to know what you are doing.

I have a new EPYC-based server I have built. I have spent the last 4 months installing various bare-metal OSes on it (I had brain surgery in the middle of that time period, which is why it still sits uncommissioned). I can't decide which OS to use on it and need some outside perspective.

Assumptions:
  • I have an aging Synology NAS used for file storage and backup (not compute); this content will be moved to the new server.
  • I make use of Domain Join; I need this to still work.
  • I have a 3-node NUC Proxmox/Ceph cluster for my general low-key compute (Docker Swarm VMs, Windows DCs, Home Assistant); I will be keeping this.
  • Any compute (VMs/containers) done on the new server will be for things the cluster can't do (NVIDIA cards, high-compute VMs, etc.).
  • A ZimaCube Pro will be used as a place to receive ZFS snapshots that are critical to back up / maybe run a second PBS server.
Where my head is at:
  • Proxmox
    • love Proxmox for being open and customizable, great for VMs; don't use CTs, likely never will (see Docker Swarm VM above for why)
    • dislike Proxmox for NAS (aka file serving): no easy, slick way to do that, no common approach adopted by the community
    • love PBS for backing up VMs, the one CT I have, my Raspberry Pis, my Ceph volume
    • when I tried TrueNAS virtualized, despite doing everything I should, on one boot Proxmox claimed the ZFS disks and basically trashed them from a TrueNAS perspective; someone else has also reported it (I won't ever virtualize TrueNAS on Proxmox)
  • TrueNAS
    • love TrueNAS for ZFS management, tight integration and GUI for a domain-joined SMB server on ZFS with snapshots
    • dislike TrueNAS when it blocks me from doing things I need (the main one being that NVIDIA drivers on the host are not self-installable)
    • dislike TrueNAS's integrated backup approach, philosophy and toolset; can't easily back up to Azure, for example
Questions:
  1. Any insights based on what you chose to do?
  2. Can I do nested virt with a Proxmox VM running on TrueNAS, where I pass the GPUs into Proxmox and then have Proxmox pass them through to various VMs and CTs (especially if the GPU supports true vGPU)?
 
I run TrueNAS virtualized in Proxmox. The key is that you MUST pass through an entire HBA or drive controller; you can't just pass through disks, or it won't work properly. But if you pass through an entire HBA, there is no way Proxmox can claim the disks, because it will never see them in the first place. I have an ASMedia 1166-based adapter that I pass through to TrueNAS, and TrueNAS behaves as if it is operating on bare metal. It can see the SMART data, etc., for the six drives attached to the card.

I tried running TrueNAS SCALE bare metal, and the virtualization and Docker app features just didn't come close to meeting my needs. So I run it virtualized on Proxmox, and I really don't use the apps feature in TrueNAS at all. I installed Portainer and do everything using my own Docker Compose YAML files. I just bind-mount my Docker volumes to the appropriate datasets on TrueNAS. Works great.

As far as backing up to Azure, you just have to master rsync or rclone.
 
any insights based on what you chose to do?
Honestly, other than "likes" and "dislikes", I'm not sure you have a problem to fix. It sounds like you already have everything you need set up; what's wrong with your current setup? Just add your "big new node" to the cluster and call it a day.

can i do nested virt with a proxmox VM running on truenas
You can, but you shouldn't. It's a nested dependency. Many homelabbers do, but I would advise against deploying it in production.
 
But if you pass through an entire HBA, then there is no way Proxmox can claim the disk because it will never see it in the first place

As far as backing up to Azure, you just have to master Rsync or Rclone.
I assure you I passed the entire HBA's ports through. It still managed to grab the disks on one boot, and I have had more than one person tell me it happened to them too. This may be because I made a mistake; it may be because of the nature of the HBA (mobo FCIO ports in SATA mode). Either way, I won't be risking it again.

The issue with rclone and rsync is that they don't appear to do incremental and versioned backups. (Unless you are suggesting I rsync my PBS store?)
 
@alexskysilk thanks. The issue is there appears to be no friendly NAS container for Proxmox that gives me all the features I need, or at least I can find no good write-up. Cockpit didn't work properly. Zamba's documentation is obtuse. I may have to accept running my own Debian CT and weeks of trial and error learning how to tweak it for domain-joined SMB.
 
I assure you I passed the entire HBA's ports through. It still managed to grab the disks on one boot, and I have had more than one person tell me it happened to them too. This may be because I made a mistake; it may be because of the nature of the HBA (mobo FCIO ports in SATA mode). Either way, I won't be risking it again.

I am not sure I understand what this means. BUT running Proxmox in TrueNAS is not something I would recommend. As I said previously, TrueNAS is not a virtualization or containerization appliance. Those functionalities are there and work nicely for basic needs, and in fact I was "suggesting" a PBS container for small setups because I have tested it and it works beautifully. However, TrueNAS does its own thing with system configuration files, overriding, rewriting, and moving stuff around quite freely. iXsystems itself states that you should not mess around with the underlying system too much because of that.
On the other hand, Proxmox is pretty much a Debian OS without many shenanigans going on. So, if you blacklist the drivers (or, better, the device itself via device IDs, which is way safer), the Linux kernel should never claim the card, let alone the drives, as it will never see them in the first place.
Are we saying that this does not work? I mean, the blacklisting of devices? That is new to me.
Then, of course, whether the card gets passed through correctly is a totally different issue.
 
That still doesn’t solve my basic dilemma. With Proxmox I have yet to understand how to make it a great NAS (network attached storage).

With TrueNAS I get a great turnkey NAS OS that limits my ability to use arbitrary hardware or hardware modes they don't support, for example vGPU (not passthrough: true vGPU with NVIDIA drivers of my choosing). I don't care about modifying the host software beyond the drivers I need. The vGPU will be used for machine learning across one or two VMs or containers at most.

To be super clear: 90% of my VMs will run on my existing NUC-based Proxmox/Ceph cluster. Those VMs will never be migrated to the new 'big server', irrespective of what OS I put on it. If I put Proxmox on it, I still wouldn't make it part of the Proxmox cluster I already have.

Interesting: no guide I saw said to blacklist the device IDs. So I guess my mistake may have been putting the drives into the machine before the TrueNAS VM was in place and the SATA port devices were added to the VM config? Or booting the machine after that without doing device-ID blacklisting?
 
That still doesn’t solve my basic dilemma. With Proxmox I have yet to understand how to make it a great NAS (network attached storage).
My dilemma is: why is this guy trying to transform Proxmox into a NAS? It's 100% my fault because I have a simple mind: TrueNAS is not a virtualization appliance; Proxmox is not a NAS appliance.

Interesting no guide I saw said to blacklist the device IDs. So I guess my mistake may have been putting the drives into the machine before the truenas vm was in place and the sata port devices added to the VM config? Or booting the machine after that without doing device ID black listing?
Definitely blacklisting device IDs is not possible straight away, my bad: what I meant, sorry again, was to assign a specific driver (vfio-pci) to a specific vendor/device ID. I haven't tried it past Proxmox 7, because a year ago I moved TrueNAS to a standalone server (because of RAM "deficiency"), but it should still work with Proxmox 8.
Besides the HBA, this was also mandatory for me, versus blacklisting the radeon driver, because I had two Radeon "cards", one being the one integrated in a Ryzen processor. So I followed something like this:
https://www.heiko-sieger.info/runni...with-vga-passthrough/#Two_graphics_processors
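The vendor/device-ID binding described above is typically done with a modprobe option; the IDs below are placeholders (find yours with `lspci -nn`), and this reflects the common Proxmox passthrough approach rather than this poster's exact files:

```
# /etc/modprobe.d/vfio.conf -- claim the card for vfio-pci before any native driver
# 1b21:1166 is a placeholder vendor:device pair; replace with your card's IDs
options vfio-pci ids=1b21:1166
# make sure vfio-pci loads before the driver that would otherwise grab the card
softdep ahci pre: vfio-pci
```

After `update-initramfs -u` and a reboot, `lspci -nnk` should show vfio-pci as the kernel driver in use for that device.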
Oh, by the way, once I learnt the trick, I also applied it to USB cards, parallel PCI cards (yes, parallel and PCI in 2024, don't ask), and serial cards (again)...
I don't know of a case where the OS suddenly took ownership of the device.
 
My dilemma is: why is this guy trying to transform Proxmox into a NAS? It's 100% my fault because I have a simple mind: TrueNAS is not a virtualization appliance; Proxmox is not a NAS appliance.

Because I live in the real world where compromises need to be made: I can only afford one beefy server.
So the options are:
  • install Proxmox and layer NAS on top:
    • could be a TrueNAS VM (passthrough ZFS)
    • could be a container doing SMB / domain join
    • could install Cockpit natively (hahaha, yeah, I am doing that)
    • use Proxmox for the experimental VMs that the Proxmox cluster I have can't do
  • install TrueNAS for the backup and ZFS management, and use its VMs only for things I can't do on my Proxmox cluster:
    • buy multiple GPUs, as TrueNAS doesn't let me use vGPU drivers (to split a card into multiple functions)
    • pass through Coral devices, because TrueNAS won't let me install Coral drivers and doesn't ship with them
I am trying to evaluate the different options and their pros and cons. Different folks have taken different approaches; you have taken one path, others have taken different paths.

I want to learn from those differences.

Thanks for the clarification on the blacklisting. I wish people had said that was critical in all the Reddit and forum posts where this topic is endlessly revisited; all they say is "pass through and you will be OK", which is often true, but not in many, many cases.

thanks for the help