Storage: Which way to go?

BEHIND-IT

Nov 4, 2017
Happy New Year,
even though it's been a while. I hope you all had a good start to the year!

I'm currently thinking about a new (used) storage solution that will serve as the storage backend for the roughly 15 VMs on our 4-node Proxmox cluster, so that VMs can be migrated between hosts without copying their disks.

We originally wanted to go with ARM-based microservers and Ceph, but unfortunately those microservers are not as well suited as we had hoped, so they have to go now. For that project we had already bought 18 x 3.5" WD Black 2 TB LFF SATA HDDs and 3 x 250 GB SSDs, which we would like to reuse in the new solution for economic reasons.

I am now considering which of the following solutions is better in terms of data security and performance:

1. 2 x DELL R720 with H310/H710 in IT mode and FreeNAS + ZFS

SW RAID 1 on 2 x 146 GB SAS for OS
RAID-Z2 for 9 x WD Black 2 TB (approx. 12.7 TB usable)

ZFS Replication between the storages
iSCSI connection to the 4 Proxmox nodes
2 x 10 GbE with LACP across two cards for storage-to-Proxmox traffic, 1 x for sync between the storages, and 1 x as reserve

2 x Intel Xeon E5-2630L v2 - 6-core 2.40 GHz (15 MB cache, 7.2 GT/s, 60 W)
2 x Dell R720, R720XD - heat sink
1 x Dell PowerEdge R720, R720XD Fan
24 x 16GB - DDR3L 1600MHz (PC3L-12800R, 2RX4, ECC REG)
1 x H310 MiniMono (SAS/SATA) RAID Kit - 0/1/5/10/50/Non-RAID
12 x Dell 3.5" (LFF) Hot-Swap Caddy
2 x Dell PowerEdge 11G SFF to LFF Converter Caddy
1 x 1GbE (Quad Port) RJ45 Ethernet - Dell I350
2 x 10GbE (Dual Port) SFP NIC - Intel X520-DA2
2 x Dell PowerEdge 'Platinum' Hot-Swap PSU 750W
1 x Dell B6 R520, R530, R720 Ready Rail Kit
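For the FreeNAS + ZFS variants, the pool layout described above would look roughly like this at the command level (a sketch only: the pool name "tank" and the device names are assumptions, and on FreeNAS you would normally do this through the GUI):

```shell
# One 9-disk RAID-Z2 vdev from the WD Black 2 TB drives
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8

# One zvol per VM disk, to be exported to the Proxmox nodes over iSCSI
zfs create -V 100G -o volblocksize=16k tank/vm-100-disk-0
```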

or

2. 2 x SuperMicro CSE-826 X9DRH-7TF 12x 3.5" (LFF) and FreeNAS + ZFS

similarly equipped to the Dell, except with an LSI 9211-8i (to my knowledge the H310 is based on this chip) in IT mode

or

3. 1 x SuperMicro CSE-846 X9DRi-F (4U) 24x 3.5" (LFF) and FreeNAS + ZFS

SW RAID 1 on 2 x 146 GB SAS for OS
RAID-Z2 for 18 x WD Black 2 TB (approx. 29.1 TB usable)

iSCSI connection to the 4 Proxmox nodes
2 x 10 GbE with LACP across two cards for storage-to-Proxmox traffic, 1 x for sync between the storages, and 1 x as reserve

2 x Intel Xeon E5-2630L v2 - 6-core 2.40 GHz (15 MB cache, 7.2 GT/s, 60 W)
2 x SuperMicro CSE-826, CSE-835, CSE-846, CSE-848 - X9DR Boards 2U Heatsink
16 x 32GB - DDR3 1866MHz (PC3-14900L, 4RX4, ECC)
1 x Adaptec 71605 1GB (SAS/SATA) RAID Kit - 0/1/5/6/10/50/60/HBA mode
24 x Supermicro LFF Hot-Swap Caddy
1 x SuperMicro CSE-826, CSE-846 2U, 4U Inner Rails
2 x SuperMicro (PWS-1K21P-1R) Hot-Swap 'Gold' PSU 1200W
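The usable capacities quoted above can be sanity-checked: a marketed "2 TB" disk holds about 1.82 TiB, and RAID-Z2 gives up two disks' worth of capacity to parity (a rough estimate that ignores padding and metadata overhead):

```python
# A "2 TB" (decimal) disk expressed in binary TiB
TIB_PER_2TB_DISK = 2e12 / 2**40  # ~1.82 TiB

def raidz2_usable_tib(disks: int, per_disk_tib: float = TIB_PER_2TB_DISK) -> float:
    """RAID-Z2 loses two disks' worth of capacity to parity."""
    return (disks - 2) * per_disk_tib

print(round(raidz2_usable_tib(9), 1))   # 9-disk vdev  -> 12.7
print(round(raidz2_usable_tib(18), 1))  # 18-disk vdev -> 29.1
```

This matches the 12.7 and 29.1 figures in the options above, so those numbers are usable TiB after parity, not raw capacity.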

Do you think these are viable approaches, or how would you solve it based on your experience?
I'm interested in your opinion / thoughts.

Many thanks in advance & best regards
Oliver
 
Hi,


I would not go down this route :)

Any remote storage like iSCSI is not as fast as local storage. Instead I would do it like this (keeping it simple and assuming you have 4 VMs per PMX host):

- I would buy only 2 servers and add them to the existing cluster (4 old + 2 new)
- Then I would set up replication in PMX for each VM to these new servers (8 VMs to the 1st new server and the remaining 8 VMs to the 2nd), at worst on a 1-minute schedule
- and of course you can also put all these VMs in HA if you need it


What you will get:

- live migration will take maybe a few minutes (from the command line, see the with-local-disks option)
- in case of a broken node, you can run its 4 VMs on these 2 new servers, in the worst case without the last minute of data (so you can lose some data)

So if losing the last minute of data is acceptable, this solution is better.
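The setup described above uses Proxmox storage replication, which requires local ZFS storage on both nodes. At the command level it looks roughly like this (the VM ID 100 and the node name "new1" are assumptions for illustration):

```shell
# Replicate VM 100 to node "new1" every minute, bandwidth-limited to 100 MB/s
pvesr create-local-job 100-0 new1 --schedule '*/1' --rate 100

# Live-migrate a VM that has local disks
qm migrate 100 new1 --online --with-local-disks
```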

RAID-Z2 for 18 x WD Black 2 TB

That will not be OK. I would not build a raidz2 vdev with more than 10-12 HDDs!
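One way to stay within that limit while still using all 18 disks is a single pool made of two 9-disk RAID-Z2 vdevs (device names are assumptions):

```shell
# One pool, two 9-disk RAID-Z2 vdevs instead of a single 18-disk vdev
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
  raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17
```

Usable capacity drops to roughly 25.5 TiB (14 data disks instead of 16), but ZFS stripes across the vdevs, so random IOPS roughly double compared to one wide vdev.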

Good luck / Bafta !
 
Hi guletz,

thank you for your assessment; there is a certain charm in needing only one solution.

I thought it was a problem if the hosts in the cluster are not exactly identical, so I hadn't considered it further.

The storage would be different, maybe even the RAM, depending on the ZFS requirements.

Is there any guidance on how to properly create a ZFS volume for the VMs (best practices)?

- How much RAM for the cache per TB?
- From what IOPS level do you need L2ARC / ZIL?

I'll go through these, they are already known to me:

https://pve.proxmox.com/wiki/ZFS_on_Linux
https://pve.proxmox.com/wiki/Storage:_ZFS
http://forums.freenas.org/threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Many thanks in advance!

All the best,
Oliver
 
Hi Oliver


I thought it was a problem if the hosts in the cluster are not exactly identical, so I hadn't considered it further.


No .... I run some very "funny" hosts: a mix of normal PCs and dedicated servers like Dell, or even clusters of normal PCs only (AMD Ryzen, AMD FX and Intel)

- How much RAM for the cache per TB?

As much as you can!! I don't think such a rule is really useful in practice. The ARC cache can also be tuned in many ways (cache all data, only metadata, and so on). For some tasks you will not want to cache data at all (like an rsync backup target). As a minimum, figure around 1 GB of RAM per TB (more or less).
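The rough "1 GB of RAM per TB" guideline above can be turned into an explicit ARC cap if you want to limit ZFS memory use; the `zfs_arc_max` module parameter is real, but the sizing itself is only this post's rule of thumb:

```python
def suggested_arc_bytes(pool_tb: float, gb_per_tb: float = 1.0) -> int:
    """Rule-of-thumb ARC size: ~1 GB of RAM per TB of pool (hedged guideline)."""
    return int(pool_tb * gb_per_tb * 2**30)

# e.g. a line for /etc/modprobe.d/zfs.conf on a ZFS-on-Linux host
print(f"options zfs zfs_arc_max={suggested_arc_bytes(29)}")
```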

- From what IOPS level do you need L2ARC / ZIL?

A separate ZIL (SLOG) is very useful if you have applications that do synchronous writes, like databases, or an NFS client/server (direct I/O)
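Your three 250 GB SSDs would fit this split nicely: a mirrored SLOG for the sync-heavy workloads plus one L2ARC device (device names are assumptions):

```shell
# Mirrored SLOG for synchronous writes (databases, NFS)
zpool add tank log mirror ada0 ada1

# One SSD as an L2ARC read cache
zpool add tank cache ada2
```

The SLOG is mirrored because losing it during a crash can cost the last seconds of sync writes; the L2ARC is only a cache, so a single unmirrored device is fine there.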

Good luck /Bafta
 
