[SOLVED] Add folders in root of storage?

Nov 16, 2022
Hi,

So I finally ditched the QNAP setup and got a real (too expensive) server for the sole purpose of running PBS. All is great, and speed is woosh!

One question though: I want to be able to utilize all 62 TB of storage I have on the new server, even though I made everything available to PBS through ZFS (/backupstorage). I know that PBS creates a .chunks dir and some other folders in that ZFS mount, but the question is: can I add more folders without disturbing PBS, or do I need to create totally separate ZFS pools and divide the disks so that I have one dedicated to PBS and one for everything else?

I hope it makes sense.

My goal is to be able to do PVE backups to the same storage but in a different folder, e.g. /backupstorage/pve.

Thanks!
 
You shouldn't store stuff in the folder of the PBS datastore. But it's totally fine to store stuff in other folders.
I would recommend not using the pool's root, but creating datasets for everything you want to store. For example a "YourPool/PBSDatastore1" for your first PBS datastore, and then other datasets for all the other stuff you want to store.
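Creating per-purpose datasets under the pool is a one-liner each. A quick sketch (the pool name "backupstorage" and the dataset names are placeholders; adjust to your setup):

```shell
# One dataset for the PBS datastore, one for other storage:
zfs create backupstorage/pbs-datastore1
zfs create backupstorage/other-stuff

# Confirm the datasets and their mountpoints:
zfs list -r backupstorage
```

By default each dataset is mounted under the pool's mountpoint, e.g. /backupstorage/pbs-datastore1.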

What do you mean by "PVE backups"? PVE Host backups (configs as files/folders and system disks as block devices) for example can also be stored on the PBS datastore using the proxmox-backup-client.
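As a sketch of a host backup with proxmox-backup-client (the repository string, user, and datastore name here are examples, not your actual setup):

```shell
# Point the client at your PBS instance (hypothetical host/datastore):
export PBS_REPOSITORY='root@pam@pbs.example.com:pbs-datastore1'
export PBS_PASSWORD='your-password-here'   # or use an API token instead

# Back up /etc of the PVE host as a file-level archive:
proxmox-backup-client backup etc.pxar:/etc
```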
 
OK, so creating a dataset seems pretty straight forward, but will the dataset share the storage of the whole pool? I mean if I have

/backupstorage/dataset1
/backupstorage/dataset2

...where /backupstorage is 62 TB, will both dataset 1 & 2 see the 62 TB and share it in between or do I have to create a "hard" limit for each dataset?

With PVE backups I mean regular vzdumps from Proxmox VE. I plan to do those as well on top of PBS backups. I also have an rsync script to backup stuff on the PVE host, but maybe using the backup-client is a better option...
 
Yes, they will dynamically share the full 62 TB, unless you set a quota for those datasets. But keep in mind that a ZFS pool shouldn't be filled more than 80%, or it will become slow and fragment faster (which is bad, as you can't defrag a ZFS pool). So if your pool shows 62 TB of capacity, you shouldn't fill it beyond roughly 50 TB.
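If you do want a hard cap, ZFS quotas can be set per dataset. A quick sketch, including a sanity check of the 80% guideline (pool and dataset names are hypothetical):

```shell
# ~80% usable ceiling for a 62 TB pool, in whole TB:
usable_tb=$((62 * 80 / 100))
echo "Keep usage below roughly ${usable_tb} TB"

# Example: cap a dataset at 20 TB (requires an existing pool
# "backupstorage" with a dataset "dataset1"):
# zfs set quota=20T backupstorage/dataset1
# zfs get quota backupstorage/dataset1
```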
 