Backup storage for nested PBS

aluputi

Hello,

I run a nested PBS as a VM in PVE on servers that aren't dedicated only to backups, and it works great. The VM runs on ZFS over SSDs.
For backup storage I have 4x 8TB enterprise HDDs, but I don't know what's the best way to configure them.

I tried the following:
- ZFS, but it's way too slow on HDDs
- mdadm RAID10 with LVM on top, which is also very slow
- mdadm RAID10 with a raw image file

After reading that it's not safe to run mdadm in PVE, I'm now thinking of passing the drives (not the controller; that's not possible) to the VM and creating the mdadm RAID inside the PBS machine. Would this be better for reliability and speed?

I'm looking forward to any suggestions and recommendations.

Thanks!
 
Hello,
use ZFS with the following setup:

ZFS mirror-0: 2x 8TB HDD - ZFS mirror-1: 2x 8TB HDD - ZFS special device: 2x SSD or NVMe

Disable atime with $ zfs set atime=off <pool>, and set the right blocksize (recordsize) for your ZFS pool.
Then you can enable compression:
$ zfs set compression=on <pool>

And what read and write speeds do you expect from a ZFS pool?
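A sketch of that layout as shell commands, assuming the pool name `backup` and placeholder device paths (substitute your own /dev/disk/by-id names):

```shell
# Striped mirrors (RAID10-like) from the four 8TB HDDs,
# plus a mirrored special device for metadata on SSD/NVMe.
# Device paths below are placeholders.
zpool create backup \
    mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 \
    mirror /dev/disk/by-id/ata-HDD3 /dev/disk/by-id/ata-HDD4 \
    special mirror /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2

# As suggested above (but see the note later in the thread:
# PBS garbage collection relies on atime, so relatime is safer
# than turning atime off entirely)
zfs set atime=off backup
zfs set compression=on backup
```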
 
In the online handbook, it's written to use SSD or NVMe for the PBS pool.

I use a ZFS mirror with 2x 1TB Crucial MX500 SSDs in Germany.
And on another PC, a ZFS mirror of 2x 1TB HDD with a ZFS special device of 2x 512GB SSD (metadata).

And I set different recordsize values for my pool and for my filesystems and volumes.
The default is 128 KiB.

With special_small_blocks=<size> you can specify that small files, smaller than <size>, will also be stored on the fast special device (SSD/NVMe).

It depends on your setup; you can change these values at any time.
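A sketch of those tunings (pool and dataset names, and the chosen sizes, are placeholders; pick values that match your workload):

```shell
# Larger records can suit PBS chunk files better than the 128K default
zfs set recordsize=1M backup/pbs

# Blocks smaller than 64K will also be stored on the fast special device
zfs set special_small_blocks=64K backup/pbs

# Verify the current settings (changes apply to newly written data only)
zfs get recordsize,special_small_blocks backup/pbs
```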
 
Thank you very much for your quick and comprehensive response, but unfortunately I don't have the option of adding devices other than the existing HDDs for the backup storage.
 
I've passed the drives through to the VM and created the mdadm RAID10 directly in PBS, and so far it works better than everything I've previously tried.
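For reference, a sketch of what that setup involves, assuming VM ID 100 and placeholder device names (substitute your own disk IDs and guest device nodes):

```shell
# On the PVE host: pass each whole disk through to the PBS VM (ID 100)
qm set 100 -scsi1 /dev/disk/by-id/ata-HDD1
qm set 100 -scsi2 /dev/disk/by-id/ata-HDD2
qm set 100 -scsi3 /dev/disk/by-id/ata-HDD3
qm set 100 -scsi4 /dev/disk/by-id/ata-HDD4

# Inside the PBS VM: build the RAID10 array and put a filesystem on it
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
```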
 
Note that PBS uses atime to keep track of which chunks to clean up during garbage collection (https://pbs.proxmox.com/docs/backup-client.html#garbage-collection), so check that relatime is enabled instead of disabling atime completely on the ZFS datasets.

Edit: Updated in response to @Neobin's correction.
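To check what's currently set (pool name `backup` is a placeholder):

```shell
# With relatime=on, atime is still updated occasionally (at most about
# once a day), which is enough for PBS garbage collection to see
# recently touched chunks
zfs get atime,relatime backup
```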
 
