Hardware recommendations for a larger setup

phs

Hi,

I'm considering building a larger PBS with at least 30 TB of disk space,
and I just wonder what kind of hardware is recommended for:
- 30TB storage with upgrade path to at least 100TB
- verify must perform extremely well ( max 12h for all data verify )
- restore should be able to saturate 10G link
- no exotic hardware
- at the same time it should be reasonably priced

Any suggestions?

cheers
phil
 
Hi,

just some input, mostly regarding storage

- verify must perform extremely well ( max 12h for all data verify )
Means you want nothing spinning, i.e., you want SSDs. Going through 30 TB/TiB of data in 12 hours means you need a sustained read bandwidth of around 700 MiB/s, so SATA would already be at its limit - current SATA provides at most 6 Gbps = 750 MB/s ≈ 715 MiB/s, and that's without protocol overhead.
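As a quick sanity check on those numbers, here is a back-of-the-envelope calculation (the 30 TB / 12 h figures come from the requirement above; the SATA value is the nominal 6 Gbps line rate before protocol overhead):

```python
# Sustained read bandwidth needed to verify ~30 TB (or TiB) in a
# 12-hour window, compared to the SATA 3 line rate.

MiB = 2**20
window_s = 12 * 3600                       # 12-hour verify window in seconds

needed_tb = 30 * 10**12 / window_s / MiB   # decimal terabytes -> ~662 MiB/s
needed_tib = 30 * 2**40 / window_s / MiB   # binary tebibytes  -> ~728 MiB/s

# SATA 3: 6 Gbps line rate, roughly 750 MB/s before protocol overhead
sata_limit = 750 * 10**6 / MiB             # ~715 MiB/s

print(f"needed for 30 TB : {needed_tb:.0f} MiB/s")
print(f"needed for 30 TiB: {needed_tib:.0f} MiB/s")
print(f"SATA 6 Gbps cap  : {sata_limit:.0f} MiB/s")
```

So a single SATA link already sits below the TiB figure even before overhead, which is why aggregating the bandwidth of several links (mirror vdevs) or moving to NVMe matters here.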

A ZFS RAID10 (striped mirror) may still work out, as it can aggregate the bandwidth of multiple SATA links a bit. There are SSD series like the Micron ION that go up to 8 TB per drive, which could be the cheapest option to barely get you there, but other vendors have similar offerings. The important things are power loss protection (capacitors) and high endurance.
An 8 drive ZFS/BTRFS RAID10 would probably get you to the performance required here.

But, if you plan to be able to restore and/or write new data to the PBS during that time and want to expand in the future, you'd probably be better off with NVMe drives, for example U.2 ones; they're also available in bigger capacities, e.g., 15.36 TB for the Micron 9300 PRO (but again, there are other vendors too, that's just one example).
You'd only need 4 drives to get a RAID10, and even having only 3 (RAID-Z1) could be an option, albeit that may limit extensibility a bit more.

In general, do not save too much on memory; that only starves the page cache and/or the ARC (if ZFS is going to be used), which would otherwise speed things up. Say 8 GiB base plus 1 GiB of memory per 1 TB of storage would be a good rule of thumb for a lower limit that won't become a bottleneck soon.
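That rule of thumb is easy to sketch (the function name is just for illustration; this is a forum heuristic, not an official sizing guide):

```python
# Suggested lower bound for RAM: 8 GiB base + 1 GiB per TB of datastore,
# so the page cache / ARC isn't starved.

def min_ram_gib(storage_tb: float, base_gib: float = 8.0) -> float:
    """Suggested minimum RAM (GiB) for a datastore of the given size."""
    return base_gib + storage_tb

print(min_ram_gib(30))   # initial 30 TB datastore -> 38 GiB
print(min_ram_gib(100))  # grown to 100 TB         -> 108 GiB
```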

- restore should be able to saturate 10G link
10G could get saturated with a RAID10 of enterprise SATA SSDs, and could even be a bottleneck if you go for the U.2 ones, but adding a 40G card down the line, if it actually becomes an issue, is always possible.

- no exotic hardware
Well, U.2 is not as common as SATA, but it's also not exotic anymore in the server space.

- at the same time it should be reasonable priced
Maybe it would help to give a budget estimate. I mean, the requirements you're stating do not sound like they are meant for lightweight home lab use (albeit I know some crazy well-equipped home lab folks).
 
Hey phs,

nice project! :) My few thoughts:
Scaling up to 100 TB using flash-based storage (SSD, NVMe, ...) wouldn't be easy - and for sure very expensive.
Personally, I would rather tend towards HDD-based storage. For example, building a (ZFS) RAID10 with 6x 10 TB SAS HDDs would give you a solid price-performance ratio. Of course, reading would not (yet) saturate your 10 Gbit NIC, but you will increase read performance with every additional pair of HDDs.

Greets
Stephan
 
The problem with HDDs is IOPS. A GC might take days to finish. L2ARC or a special vdev might help with GC, but a verify could still take an eternity. Especially "max 12h for all data verify" for up to 100 TB just isn't possible with spinning rust.

But yes, that really sounds expensive. A 16 TB Micron 9300 Pro is about 3,700 €. If you want redundancy and performance, you would need to buy 16 of them to get a 128 TB striped mirror, which would result in about 102 TB of usable capacity (because 20% of a ZFS pool should be kept free). So that's roughly 60,000 € just for the SSDs. RAID-Z2 might save some SSDs, but then you are limited to the IOPS of a single SSD, and adding new SSDs later will only increase capacity without giving any additional performance.
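For reference, the arithmetic behind that estimate (drive count, price, and the 20% free-space rule are the figures from this post, not current quotes):

```python
# Capacity and cost of the 16-drive striped-mirror example above.

drives = 16
drive_tb = 16            # Micron 9300 Pro, marketed as 15.36 TB, rounded here
price_eur = 3700         # approximate price per drive (thread figure)
keep_free = 0.20         # rule of thumb: keep ~20% of a ZFS pool free

raw_tb = drives * drive_tb                 # 256 TB of raw flash
mirrored_tb = raw_tb // 2                  # 128 TB after mirroring
usable_tb = mirrored_tb * (1 - keep_free)  # ~102 TB actually usable
total_eur = drives * price_eur             # ~59,200 EUR for the SSDs alone

print(mirrored_tb, usable_tb, total_eur)
```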
 
Hey Dunuin,

you caught me :D - I have no experience with PBS yet, so I don't know what is needed for fast GC and data verification.
The "max 12h for all data verify" is related to the 30 TB, I think, but anyway you're right: with these performance requirements, flash-based storage is the way to go.

Greets
Stephan
 
