Update on our PBS setups

Great that you use 3-way mirrors. I don't see a lot of people using them because of the cost, even though a raidz2/3 is slower and a 2-way mirror is less reliable.
 
With large disks like these, the rebuild times are too long to depend on a single replica. Thus, you need three-way mirrors. Each machine also has a hot spare.
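A layout like that might be created as follows. This is only a sketch: the pool name and device paths are placeholders, not details from the thread, and the special vdev (mentioned later in this thread) is mirrored three ways to match the redundancy of the data vdevs.

```shell
# Hypothetical example: stacked three-way mirrors, an SSD/NVMe
# special vdev for metadata, and a hot spare per machine.
zpool create tank \
  mirror /dev/sda /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde /dev/sdf \
  special mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
  spare /dev/sdg
```

Losing the special vdev loses the pool, which is why its redundancy should be at least as good as that of the data vdevs.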
 
Thank you for sharing. Speaking of performance (no complaints whatsoever!): can users help spread out the load for the machines by using or avoiding certain days or times for jobs?
 
No. Backups themselves go fine; we hardly see any issues with that. We do reconfigure verification jobs (which we feel we don't actually need) or pruning jobs if they cause too much load during peak backup times.
 
I also notice no ZIL and no L2ARC, just an SSD as special device for metadata.
I guess a SLOG won't make much sense, as PBS primarily uses async writes? And an L2ARC only makes sense if you have files so big that they won't fit in the much faster RAM (of which you get less when using L2ARC) but that you still need to access a lot. I guess that's not very useful for a big backup server that just stores 4 MB chunks which almost never get read again except by maintenance tasks. L2ARC for caching metadata would be useful, but not when you already store it on a special device.

Did you test whether increasing the recordsize or enabling relatime makes a noticeable difference?
 
What are your vdev configs? I also notice no ZIL and no L2ARC, just an SSD as special device for metadata.

Like the article says, stacked three-way mirrors. Indeed no ZIL or L2ARC, as the dataset is too large for L2ARC and the writes are not synchronous enough for a ZIL to help.

We have enabled relatime, which makes a real difference for GC and pruning. GC and pruning mainly hit the NVMe special vdevs, not so much the spinning disks.
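For reference, both of the settings discussed above are plain dataset properties; the dataset name below is a placeholder. PBS stores roughly 4 MB chunks, so a larger recordsize than the 128K default is a plausible thing to try, as the question above suggests.

```shell
# Hypothetical dataset name. relatime updates atime at most once a day,
# which keeps GC's atime-based chunk tracking cheap on the special vdev.
zfs set relatime=on tank/pbs
# 1M is the largest recordsize available without tuning zfs_max_recordsize.
zfs set recordsize=1M tank/pbs
# Verify the current values:
zfs get relatime,recordsize tank/pbs
```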
 
Dear Tuxis crew,
regarding the special device SSD vdevs: how full do they get? Did they ever fill up and overflow onto the standard spinning-rust vdevs?

Thanks
 
If I remember right, the rule of thumb is about 0.3% of the size of your data. And in case a special device gets more than 75% full, metadata will spill over onto the HDDs so the pool stays functional.
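As a back-of-the-envelope check of that rule of thumb (the 100 TB pool size is an assumption for illustration, not a figure from this thread):

```shell
# Hypothetical sizing: 0.3% metadata rule of thumb, 75% fill ceiling.
pool_tb=100                                # assumed usable data, in TB
meta_gb=$(( pool_tb * 1000 * 3 / 1000 ))   # 0.3% of the pool, in GB
echo "metadata estimate: ${meta_gb} GB"
# Size the special vdev so the estimate stays below ~75% utilisation,
# leaving headroom before metadata spills over onto the HDDs:
special_gb=$(( meta_gb * 100 / 75 ))
echo "special vdev target: ${special_gb} GB"
```

So for 100 TB of data this suggests roughly 300 GB of metadata and a special vdev of about 400 GB usable.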
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
