Best setup for 4xSSD RAID10

speedbird

Member
Nov 3, 2017
Hello community :)

I'm currently planning to use Proxmox to build my new virtualization environment. Before I actually rent the server for my project, I want to ask you about the storage options I have.

What I'll order:
4x 480GB DC SSD

So the question is:
What's the best way to build this storage solution? Note that these drives also have to hold Proxmox itself, since I can only attach four drives in total.

I could do this:
· Order a HW RAID controller and build the RAID10 in hardware. Because I want snapshots, I would have to choose LVM. What's the best approach to make this work?

· Discard the idea of HW RAID and use ZFS to build a ZFS RAID10 instead. However, I don't know how that would actually work here, since I can't use the ISO installer and have to install Stretch first. Because Debian already needs its own partitions, I could only create equally sized partitions on each drive for my ZFS pool. That wastes space: if I dedicate ~50G to Proxmox on the first drive, I can only use 430GB on each of the other drives as well, since the partitions have to be of equal size, wasting 150G in total. On top of that, my Proxmox installation wouldn't be redundant at all anymore, since it would sit on just one drive outside the RAID10 pool.
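From what I've read so far, the pool itself would be created roughly like this (pool name and device paths are placeholders; a real setup should use stable /dev/disk/by-id paths):

```shell
# Rough sketch of a ZFS "RAID10": two mirror vdevs striped together.
# Pool name and device paths are placeholders -- use /dev/disk/by-id in practice.
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd

zpool status tank   # should list two mirror vdevs
```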

To be honest, my experience with ZFS is nil, but I keep hearing good things in terms of reliability and flexibility, so I'd like to give it a try. However, as an oldschool RAID-card kind of guy, I fear that since ZFS is software-only, the next Proxmox update could easily kill the whole RAID10 and make everything on it inaccessible. Of course I'll do backups, but that just adds a lot of downtime and work to everything.

Anyway... I know the Proxmox installer ISO does a good job of making all the settings I need, BUT I can't use it since I don't have any kind of IPMI or iDRAC interface to boot from ISO.

Although Hetzner does provide LARA, a KVM-over-IP console, I'm not sure I could use it to run the installer. Does anybody have experience here?

However, even if I could - which I don't know - the question remains what the most robust and reliable storage solution would be with my given configuration. It has to work. Period. I/O performance and fast response times are absolutely key for me.

Hopefully you guys can point me in the right direction. Thanks!

// EDIT //

Also, there's a thread out here where people report ~3-second packet drops with Proxmox 5 under high load, e.g. during backups. Is this problem still around, and would you recommend using version 4.4 instead? I usually don't like running older software, but since I need the system to be reliable and stable, I'm curious about that.
 

speedbird

Member
Nov 3, 2017
The primary use will be a webserver installation with ~10GB of databases and ~40-50 domains.
 

otbutz

Member
Oct 17, 2017
How much data do you plan per domain? 960GB would yield ~15GB each. I'd go for two smaller SSDs using mdadm for the OS and two 480GB SSDs with ZFS RAID1. And don't forget about ZFS compression, which will boost your usable storage capacity even more.
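A rough sketch of that layout (device and partition names are placeholders, not a tested recipe):

```shell
# Small mdadm RAID1 for the OS on two disks (partition names are placeholders)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# ZFS mirror on the two 480GB SSDs, with lz4 compression enabled
zpool create -o ashift=12 data mirror /dev/sdc /dev/sdd
zfs set compression=lz4 data
```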
 

aderumier

Active Member
May 14, 2013
If you don't need ZFS features like replication to another server, I'd go with hardware RAID10 (without cache) + lvm-thin for snapshots.
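A minimal lvm-thin sketch on top of the RAID controller's logical volume (the VG/LV names and sizes below are placeholders):

```shell
# Thin pool inside an existing volume group "pve" (names/sizes are placeholders)
lvcreate -L 800G --thinpool data pve

# Thin-provisioned VM disk and a near-instant snapshot of it
lvcreate -V 32G --thin -n vm-100-disk-0 pve/data
lvcreate -s -n vm-100-snap1 pve/vm-100-disk-0
```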
 

otbutz

Member
Oct 17, 2017
aderumier said: "if you don't need zfs feature like replication to another server, I'll go to hardware raid10 (without cache) + lvm-thin for snapshots."
The performance of LVM with snapshots tends not to be the best.
 

speedbird

Member
Nov 3, 2017
Well, thanks so far. There will be 2-3 other VMs so not all the storage will be used for the webserver only.

So with only four drives, which have to hold Proxmox itself AND all data, I still don't know how to create a working ZFS RAID10. Is it even possible when I can't use the ISO installer?

Also, how much of a speed impact does the LVM snapshot capability have?

Thanks guys.

Oh and btw... 4.4 or 5.1 is also still the question. I need a robust setup with good performance. Is 5.1 already usable? Because "countless bugfixes" in the changelog doesn't sound too promising :)
 

LnxBil

Famous Member
Feb 21, 2015
Germany
I install every system I need to run off-site inside PVE itself, on virtual disks matching the target (4x 480 GB virtual disks in your case). Then I boot the remote server into a live/rescue system (e.g. Hetzner's), copy all disks from my virtual system to the off-site system, and boot it up. This way you can install, configure and tune everything in your own network and just copy everything over.
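The copy step can be as simple as streaming each disk image over SSH while the remote box sits in the rescue system (the hostname and device paths below are placeholders):

```shell
# Stream a locally prepared disk image to the remote server's first drive.
# Hostname and device paths are placeholders.
dd if=/path/to/vm-disk0.raw bs=1M status=progress \
    | ssh root@rescue.example.com 'dd of=/dev/sda bs=1M'
```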
 

speedbird

Member
Nov 3, 2017
Thanks for your input, but I already had to make my move. Sadly, I couldn't get ZFS stable anyway, so I went with mdadm RAID10 and LVM, which so far works great. The downside is not having ZFS compression, but on the other hand the system is stable and doesn't run out of memory when benchmarking I/O performance.

I hope that within the next few years ZFS becomes easier to handle for someone with as little knowledge of it as me. I just couldn't find a setting where ZFS didn't eat up all system memory until the whole machine crashed. Even though I limited it to 8GB on a 32GB system, it still ate up everything when running ATTO benchmarks.
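For reference, the memory limit the poster mentions is the ARC cap, usually set via a module option (8 GiB shown, value in bytes; this assumes the stock ZFS-on-Linux module shipped with PVE):

```shell
# Cap the ZFS ARC at 8 GiB (value is in bytes: 8 * 1024^3)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # so the limit also applies when root is on ZFS

# After a reboot, verify with:
cat /sys/module/zfs/parameters/zfs_arc_max
```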

Since I didn't have time for weeks of trial and error, I had to go the other way anyway. As I said, I hope for the future. Maybe the Proxmox guys can some day introduce a foolproof implementation for guys like me.
 

LnxBil

Famous Member
Feb 21, 2015
Germany
That is sad to hear. Just as a reminder: you're running an unsupported setup right now :-/

Yes, ZFS needs RAM, yet it should work without crashing. I ran ZFS on a Raspberry Pi 1 until the SD card died, and it worked. It was hellishly slow, yet it worked.
 

speedbird

Member
Nov 3, 2017
I know, but it works like a charm. Great performance and stability, and no worries about the filesystem's own RAM usage.

I'll give ZFS another try, maybe with the next Proxmox version on a new server, but for now it has to work like this.
 

LnxBil

Famous Member
Feb 21, 2015
Germany
Yes, sure. I also tried for some time before converting everything (possible) to ZFS. I still get crashes and OOM issues from time to time, yet only in corner cases, e.g. pools that are approx. 90% full. Besides that, ZFS has been rock solid over the last year or so. It had some problems before, and it's getting better and better.
 
