Best practice for using 2 SSDs with a PBS VM or LXC

epionier
Oct 21, 2024
Hello,

first of all, I know the recommendation is to run PBS as a standalone server, but I already have a separate off-site PBS.

I have a server into which I will install two SATA SSDs. I want to use them exclusively as a datastore for backups of the VMs / LXCs on that server, plus /etc/pve (from PVE) and /etc/proxmox-backup (from PBS).
Should the disks be used as a ZFS mirror?
The PBS itself (VM or privileged LXC, depending on the recommendation) will run on another SSD, the one PVE is installed on.

What is the best practice (pros / cons) for using the SSDs? Especially so that, in the worst case, I can use the backup SSDs in another server as quickly and as safely as possible (plug & play).

1. Create a ZFS pool from the two SSDs in PVE and "share" it with PBS
-> in what way: VirtioFS, mount point or via SMB...?
2. Pass the disks through to a PBS VM and create a ZFS pool there from the passed-through disks
(no other SATA devices are used in the server)

I hope someone can share their experience!
 
If you want a "virtual" PBS, I'd have PVE manage the disk(s) as a ZFS pool and then give a PBS CT some storage from that.
CTs (with ZFS storage) use datasets. These are fast and efficient (the ZVOLs that VMs use aren't), and you can easily access them from the host managing the pool too. Useful in emergencies.
Ease of access is similar to passing a whole disk to a VM, but this way you can also use the pool for normal PVE backups or other such things if needed, and thus have more control over your storage.
CTs also use fewer resources because, among other things, memory is a quota and not an allocation. I see no reason to make it privileged.
I fear that using PBS directly on the host might lead to packaging/update issues due to different release strategies.

TLDR: I'd pick option 1 but with a normal virtual disk given to a PBS CT.
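For reference, a minimal CLI sketch of that setup. The pool name "backuppool", the storage ID and the disk by-id paths are placeholders I made up; node > Disks > ZFS does the same thing in the GUI.
Bash:
# Create a mirrored pool from the two SSDs (ashift=12 for 4K sectors)
zpool create -o ashift=12 backuppool mirror \
    /dev/disk/by-id/ata-SSD_ONE /dev/disk/by-id/ata-SSD_TWO
# Register it with PVE as a ZFS type storage (GUI: Datacenter > Storage > Add > ZFS)
pvesm add zfspool backuppool-storage --pool backuppool --content rootdir,images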
 
@_gabriel: Thanks for your input, but I do not want PBS installed on top of the PVE installation; I'd prefer it as an LXC. That way I keep the data separate and can, for example, back up the PBS LXC itself easily. I prefer the rule of not installing anything on the host if I don't absolutely have to.

@Impact: Can you explain this a bit further? I come from ESXi (VMFS) and have only used Proxmox privately so far, and only with ext4, so I have no ZFS experience.
I just looked it up and saw that ZFS needs some kind of dataset to work and cannot write data plainly to the disk (like ext4 does).
So when I create a filesystem dataset on PVE (because ZFS needs some form of dataset and cannot store data directly), in what way do I add it to the LXC? Is it a GUI option or a terminal solution?
 
I feel like there are people who can describe this much better without misusing the terms, but I'll try.
When you create a ZFS pool via node > Disks > ZFS, your pool will have a root dataset already. You can see it and its mount point with zfs list.
This mount point is perfectly usable as is to store files. If you select Add Storage during creation, PVE automatically creates a ZFS type storage for it.
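A quick sketch of checking that (the pool name is just an example):
Bash:
# List datasets and their mount points
zfs list -o name,used,avail,mountpoint
# Check pool health and the mirror layout
zpool status backuppool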

You can add additional datasets as guest storage via Datacenter > Storage > Add > ZFS. Note that ZFS is a special type of storage.
Creating additional datasets is useful to separate things, as different datasets can have different properties; different compression, for example.

For example you can create a dataset like this
Bash:
zfs create nameofpoolhere/backups
for non-PBS backups (to back up the CT itself, for example) and then add it as a Directory storage via Datacenter > Storage > Add > Directory.
Select the Content Types you want. You can also just use the root dataset for this.
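If you prefer the CLI for that step, the equivalent could look roughly like this (the storage ID "pve-backups" is a made-up example; the path is the dataset's mount point):
Bash:
# Add the dataset's mount point as a Directory storage for vzdump backups
pvesm add dir pve-backups --path /nameofpoolhere/backups --content backup
pvesm status    # verify the new storage shows up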

To use it, all you need to do is select the ZFS type storage when creating a disk, and PVE does the rest, such as creating a dataset or ZVOL behind the scenes.
You can do almost everything in the GUI. Creating a custom dataset needs to be done in the CLI, but it's just one command and entirely optional.
You can think of a dataset as a directory and a ZVOL as a volume/disk. The ZFS storage type is for volumes/disks and Directory is for files.
There are also file-based virtual disks you can create on a Directory storage, but I do not recommend them anyway.

As you can see, this is not trivial to explain properly in a few words. I hope that answered some of your questions though. If not, let me know.
 
@Impact: Thank you for taking the time to write this up. I now understand the general principles of using ZFS pools/datasets.
To a certain extent I will have to try different methods when I am ready to go.
My only remaining question is: why not use the datastore (the automatically created dataset or a manually created one) as a mount point in the PBS LXC?
You say you do not recommend it and prefer using a virtual disk, but why? Sorry for the question, but with a view to using the backup disks in the worst case, it seems preferable to me to have raw data on them rather than a virtual disk.

A datastore in PBS terms is, as far as I can tell, just a folder to store data in; that's why I am wondering.
 
I do not recommend file-based disks (as in a virtual disk/mount point on a Directory storage, not to be confused with a bind mount) as they are slow and don't support snapshots for CTs. A mount point (a CT virtual disk) from a ZFS type storage is fine and is what I do recommend.
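As an illustration (the CT ID, storage ID, size and path are made up for this sketch), giving an existing PBS CT such a mount point from a ZFS type storage could look like this:
Bash:
# Allocate a new 500 GiB mount point for CT 101 from the ZFS type storage "backuppool-storage";
# PVE creates a dataset behind the scenes and mounts it at /datastore inside the CT
pct set 101 -mp0 backuppool-storage:500,mp=/datastore
Inside the CT you would then point the PBS datastore at that path (via the PBS GUI or proxmox-backup-manager datastore create).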
 
Ah OK, so I got you wrong. So: a virtual disk for the LXC's own files (not on a ZFS volume) and a mount point from the ZFS type PVE storage for the backups. Thanks!
I did not think of a mount point as a virtual disk, but somehow it is. Now it makes sense to me.
 
What the CT section calls a mount point is basically a virtual disk too, depending on the storage. In the case of ZFS it's a dataset. With a Directory storage it could be a qcow2 or raw file, which would then be the file-based virtual disk I recommended against. It just depends.
You would create that mount point on the ZFS type storage. You can even give your PBS CT disks from different storages; for example, the root disk on an SSD and the data on a HDD (see the sketch below).
The backup Directory type storage is only used by PVE itself as a target for PVE backups, similar to how a PBS storage would be a target.
Bind mount points are shared directories between the node and a CT. They are a pain though, and you don't need them for this.
I hope that makes sense.
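A minimal sketch of that split at creation time (the CT ID, storage IDs, sizes and the template name are placeholders, not anything specific to this thread):
Bash:
# Root disk on an SSD-backed storage, data mount point on the backup pool
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname pbs \
    --unprivileged 1 \
    --rootfs local-zfs:8 \
    --mp0 backuppool-storage:500,mp=/datastore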
 
Yes, that makes sense. I understand: use a storage of type "ZFS" in the datacenter, not "Directory".
Four questions remain:
1. In the worst case, when I unplug the two backup SSDs (which form a ZFS pool created in Proxmox) and put them into another server with Proxmox or even plain Debian installed, will the existing ZFS pool (and all the data on it) be recognized on the new system without hassle?
2. Do I need a privileged CT, or is an unprivileged CT sufficient for using the ZFS mount point?
3. Can multiple CTs access the same datastore / mount point simultaneously, or is this restricted to one?
4. Is compression (lz4) with ZFS a performance downer when the amount of storage is not a problem?

Sorry for all these questions, but coming from ESXi with its proprietary VMFS file system, I am looking forward to the freedom that Proxmox offers.
 
1. Yes, you should be able to import the pool via zpool import (see the sketch after this list).
2. No privileged container needed.
3. That's what bind mount points are for.
4. Depends on the CPU, but very unlikely unless you need to write at > 1 GB/s or so. The lz4 command has an included benchmark mode and the PBS client has one too (also sketched below). Also see the benchmarks here: https://github.com/lz4/lz4?tab=readme-ov-file#benchmarks
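A rough sketch of both checks (the pool name and the sample file are placeholders):
Bash:
# 1. On the new machine: list importable pools, then import by name
zpool import
zpool import backuppool    # add -f if the pool was not exported on the old system
# 4. Quick throughput checks: lz4's built-in benchmark compresses a sample file you give it
lz4 -b1 somefile.bin
proxmox-backup-client benchmark    # the PBS client's built-in benchmark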
 