PBS with 32 volumes in RAID10

Of course, but you have no self-healing. Better is to tune the ZFS ARC; you can limit the RAM usage of ZFS.
 
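(For context: a minimal sketch of how the ARC is usually capped on a ZFS-on-Linux / Proxmox box; the 64 GiB value is only a placeholder, adjust it to your hardware.)

# Cap the ARC at 64 GiB at runtime (value in bytes)
echo $((64 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
# Make the limit persistent across reboots (overwrites any existing zfs options file)
echo "options zfs zfs_arc_max=$((64 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
# Needed if the root filesystem is on ZFS, so the option applies at early boot
update-initramfs -u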
Of course, but you have no self-healing. Better is to tune the ZFS ARC; you can limit the RAM usage of ZFS.
I don't think it's possible to shrink the ARC from 384 GB to 64 GB. My last PBS broke with uncorrectable data errors on the datastore pool (many chunks were lost) because of a failed memory bank, even though it had ZFS. I'm now running a fresh install from scratch. I was safe because I had a remote.

On the other hand, with the remote I have some iowait in production because I have spinning hard drives in RAID5. I'm coming out of the tunnel.
 
What about SSDs as special devices or a persistent L2ARC to store the metadata? With fast SSDs storing the metadata you might not need that much RAM (it will still be slower, but by far not as bad as reading all metadata from slow HDDs).
 
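(A rough sketch of the persistent-L2ARC variant; the pool name "tank", the dataset "tank/datastore" and the NVMe device path are placeholders, not from the thread.)

# Add an SSD/NVMe as L2ARC (cache) device to the pool
zpool add tank cache /dev/nvme0n1
# Keep only metadata in the L2ARC for the datastore dataset
zfs set secondarycache=metadata tank/datastore
# L2ARC persistence across reboots is controlled by this module parameter
# (enabled by default on recent OpenZFS releases)
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled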
What about SSDs as special devices or a persistent L2ARC to store the metadata? With fast SSDs storing the metadata you might not need that much RAM (it will still be slower, but by far not as bad as reading all metadata from slow HDDs).
Fine! Thanks, I'll look at the sizing guidelines.
 
For special devices, keep in mind that they are not a cache. Losing them also means losing all data on those HDDs. A special device can't be removed without destroying the pool, and it will only store new metadata, not the metadata that is already stored on the HDDs.

With an L2ARC this all wouldn't be a problem, as it only caches the metadata stored on the HDDs.

But unlike an L2ARC, special devices would also boost write performance, as the HDDs then don't need to read or write any metadata anymore.

With either of those you will see a massive performance improvement for GC tasks and for listing the contents of a datastore, as those are heavily metadata-based workloads.
 
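(For reference, adding a mirrored special vdev might look roughly like this; the disk names and dataset are placeholders. The mirror matters because, as said above, losing the special device means losing the pool.)

# Add a mirrored special vdev that will hold newly written metadata
zpool add tank special mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b
# Optionally also store small blocks (here <= 4K) on the SSDs, not only metadata
zfs set special_small_blocks=4K tank/datastore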
For special devices, keep in mind that they are not a cache. Losing them also means losing all data on those HDDs. A special device can't be removed without destroying the pool, and it will only store new metadata, not the metadata that is already stored on the HDDs.

With an L2ARC this all wouldn't be a problem, as it only caches the metadata stored on the HDDs.

But unlike an L2ARC, special devices would also boost write performance, as the HDDs then don't need to read or write any metadata anymore.

With either of those you will see a massive performance improvement for GC tasks and for listing the contents of a datastore, as those are heavily metadata-based workloads.

Great! I'll see if a 2 TB SSD mirror is enough with 64 GB of ECC RAM.

Thanks, best regards
 
More RAM is always better. With 64 GB of RAM you are on the very low end when using 400 TB of HDDs. With that much money spent on the HDDs it shouldn't hurt that much to add some more RAM. For the price of just a single HDD you could double your RAM, resulting in better performance for all the other 34 disks. That's money saved in the wrong place.
 
I've seen that it's recommended for ZFS to have 1GB of RAM per TB of storage capacity unless SSD special devices are used.
Where did you read that the RAM minimum depends on the special device? It's new to me that you can save RAM with a special device.
 
Where did you read that the RAM minimum depends on the special device? It's new to me that you can save RAM with a special device.
It's not that ZFS won't work with quite low RAM (but of course, lower it too much and it will stop working entirely), but it may get really slow without the read caching of often-accessed blocks (dnodes/metadata/data/prefetches and so on). Adding an L2ARC or special device won't lower the RAM requirements; it just makes low RAM hurt less, as stuff like metadata that can't fit in RAM will then be read from the faster SSD instead of the slower HDD. So the performance impact when running low on RAM will be smaller.
 
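(If you want to see how much the ARC is actually helping on a given box, the ARC statistics that ship with ZFS on Linux are a quick check; nothing here is specific to this thread.)

# Human-readable overview of ARC size, target size and hit rates
arc_summary | head -n 40
# Raw counters: current size, target size, hits and misses
awk '$1 ~ /^(size|c|hits|misses)$/' /proc/spl/kstat/zfs/arcstats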
More RAM is always better. With 64 GB of RAM you are on the very low end when using 400 TB of HDDs. With that much money spent on the HDDs it shouldn't hurt that much to add some more RAM. For the price of just a single HDD you could double your RAM, resulting in better performance for all the other 34 disks. That's money saved in the wrong place.
You're right. Unfortunately I have to use the hardware I already have at home, I can't make purchases: that's why I was leaning towards mdadm.
 
So with PBS you don't need a read or write cache for data, because you mostly write linearly to the storage. I have calculated the size of the metadata when you use a RAID60 with 34 x 12 TB disks and a recordsize of 128K. I was shocked, but you need a little over 1 TB for the metadata alone. Then the best is really to add a special device and configure the ARC with "zfs set primarycache=none tank/datab" (example). This is the best compromise between RAM usage and performance.
RAID10: metadata = 600 GB
 
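(Just to make those figures reproducible: a back-of-the-envelope check, assuming metadata is roughly 0.3% of usable capacity at 128K recordsize and that the RAID60 is two 17-disk RAID6 groups; both are my assumptions, not stated in the post.)

# RAID60 as 2 x 17-disk RAID6 -> ~30 data disks x 12 TB = 360 TB usable
echo "$((360000 * 3 / 1000)) GB metadata"   # ~1080 GB, i.e. "a little over 1 TB"
# RAID10 -> 17 mirrors x 12 TB = 204 TB usable
echo "$((204000 * 3 / 1000)) GB metadata"   # ~612 GB, i.e. the ~600 GB figure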
PBS most definitely is not writing linearly to storage.

The bulk of its I/O is:
- R/W on chunks, which are usually 1-4 MB in size (1 chunk == 1 file), but there is no correlation on-disk between the chunks making up a single backup snapshot (a quick way to look at this on an existing datastore is sketched below)
- metadata access, both in the sense of PBS metadata (smaller files/indices) and actual FS metadata (random I/O again!)
 
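(To get a feel for that on an existing datastore, one can look at the chunk store directly; the datastore path below is a placeholder.)

# Chunks are stored as individual files under .chunks/, spread over many subdirectories
ls /path/to/datastore/.chunks | head
# Size distribution of the largest chunk files (typically in the 1-4 MB range)
find /path/to/datastore/.chunks -type f -printf '%s\n' | sort -n | tail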
Hi all,
I had to resign myself to using mdadm. I had to make consistent backups quickly and I didn't have the necessary RAM. I also realized that the backups I had on the remote PBS had corrupted chunks (imported with sync from the previous broken PBS, I think) and that they weren't replaced, as they were already present in the indexes.

Thank you all

--
Luca
 