For 32 spinning hard drives in RAID10 with ZFS you need 384 GB of RAM, while with mdadm you need 40 GB. For PBS, could I just make do with mdadm under XFS?
I think it isn't possible to shrink the ARC from 384 GB to 64 GB. My last PBS broke with uncorrectable data checksum errors on the datastore pool (many chunks were lost) because of a failed memory bank, even though it ran ZFS. I'm doing a scratch install now. I was safe because I had a remote, of course. But with mdadm you have no self-healing; it's better to tune the ZFS ARC. You can limit the RAM usage of ZFS.
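Limiting ZFS's RAM usage comes down to capping the ARC via the `zfs_arc_max` module parameter. A minimal sketch (the 16 GiB cap is an example value, not a recommendation for this setup; on Proxmox-based systems the modprobe change needs an initramfs rebuild to survive reboot):

```shell
# Persist the cap across reboots (value is in bytes; 16 GiB here as an example):
echo "options zfs zfs_arc_max=$((16 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# Or apply it immediately at runtime without rebooting:
echo $((16 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```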
Fine! Thanks, I'll look at the sizing guidelines.

What about SSDs as special devices or a persistent L2ARC to store metadata? With fast SSDs holding the metadata you might not need that much RAM (it will still be slower, but by far not as bad as reading all metadata from slow HDDs).
For special devices, keep in mind that they are not a cache. Losing them also means losing all data on those HDDs. Special devices can't be removed without destroying the pool, and they will only store new metadata, not the metadata that is already stored on the HDDs.
With an L2ARC none of this would be a problem, as it only caches the metadata stored on the HDDs.
But unlike an L2ARC, those special devices also boost write performance, as the HDDs then no longer need to read or write any metadata.
With either of those you will see a massive performance improvement for GC tasks and for listing the contents of a datastore, as those are heavily metadata-based workloads.
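For reference, both options are attached with `zpool add`; the pool name `tank` and the device paths below are placeholders for this sketch:

```shell
# Placeholder pool name (tank) and device paths; use /dev/disk/by-id names in practice.

# Special device: always mirror it, since losing it loses the whole pool:
zpool add tank special mirror /dev/disk/by-id/nvme-ssd1 /dev/disk/by-id/nvme-ssd2

# Persistent L2ARC instead: a single cache device is fine, losing it only loses the cache:
zpool add tank cache /dev/disk/by-id/nvme-ssd3
zfs set secondarycache=metadata tank   # have the L2ARC hold only metadata, not data blocks
```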
What do you mean by "I think it isn't possible to shrink the ARC from 384 GB to 64 GB"?
I have a 34-bay chassis with 12 TB hard drives and only 64 GB of ECC RAM. I've seen that it's recommended for ZFS to have 1 GB of RAM per TB of storage capacity unless SSD special devices are used.
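Applying that rule of thumb to this chassis (assuming all 34 bays are populated with the 12 TB drives):

```shell
# 1 GB of RAM per TB of raw storage, per the rule of thumb above.
BAYS=34
TB_PER_DRIVE=12
RAW_TB=$((BAYS * TB_PER_DRIVE))
echo "raw: ${RAW_TB} TB -> ~${RAW_TB} GB RAM by the 1 GB/TB rule (vs 64 GB installed)"
```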
"I've seen that it's recommended for ZFS to have 1 GB of RAM per TB of storage capacity unless SSD special devices are used."
Where did you read that the RAM minimum depends on the special device? It's new to me that you can save RAM with a special device.
It's not that ZFS won't work with quite low RAM (though of course, lower it too much and it will stop working entirely), but it may get really slow without the read caching of often-accessed blocks (dnodes/metadata/data/predictions and so on). Adding an L2ARC or special device won't lower the RAM requirements; it just makes things less bad with low RAM, since stuff like metadata that can't fit in RAM will then be read from the faster SSD instead of the slower HDD. So the performance impact when running low on RAM will be smaller.
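To see how much RAM the ARC actually holds on a given box (read-only diagnostics; these only work on a host with ZFS loaded):

```shell
# Current ARC size and configured maximum, in GiB (arcstats values are in bytes):
awk '/^size |^c_max / {printf "%s %.1f GiB\n", $1, $3 / 2^30}' /proc/spl/kstat/zfs/arcstats

# Or use the summary tool that ships with OpenZFS:
arc_summary
```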
More RAM is always better. With 64 GB of RAM you are on the very low end when using 400 TB of HDDs. With that much money spent on the HDDs it shouldn't hurt much to add some more RAM. For the price of just a single HDD you could double your RAM, resulting in better performance for all those other 34 disks. That's money saved at the wrong place.

You're right. Unfortunately, I have to use the materials that I have at home; I can't make purchases. That's why I was leaning towards mdadm.