Best practice configuration?

hidalgo
Nov 11, 2016
I got a Supermicro server with hardware RAID: 2 small SAS disks and a bunch of bigger SATA disks. Until now I've played around with different setups because I'm new to Proxmox.
Now it's time to get ready for the final setup. What should I do?
I'd like to have the SAS disks mirrored for boot and system and the SATAs for data. My current setup ignores the SATAs and has a zpool on my mirrored SAS disks. Should I create a second zpool with my SATAs, or should I expand the existing zpool with the new disks? Pros and cons?
 
You won't be happy with different disk types (SAS vs. SATA) in one ZFS pool. It technically works, but speed will be inconsistent.

It is really hard to give advice without knowing how beefy your system is. ZFS needs a lot of RAM to be fast. If this is an older system with e.g. a battery-backed hardware RAID controller with great RAID5 speed and you only have e.g. 32 GB of RAM, you'd be better off using the hardware RAID instead of ZFS in terms of speed. If speed is not your primary goal, I'd always go with ZFS because of its great features.

I'm running a lot of boxes with hardware RAID1 on two internal 146 GB SAS drives (standard server "disk ammunition") and have only my inexpensive SATA disks in a ZFS pool. You'll lose a little bit of RAM to the caches of two filesystems (normally ext4's page cache on the RAID and the ARC for ZFS), but it's fine.
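For the "SATA disks in their own pool" approach, creating the pool is a one-liner. This is only a sketch with hypothetical pool and device names; using /dev/disk/by-id paths instead of sdX names is generally recommended so the pool doesn't depend on how the kernel enumerates disks:

```shell
# Hypothetical device names -- replace with your actual SATA disks.
# by-id paths keep the pool stable if disks are enumerated in a different order.
zpool create tank \
  mirror /dev/disk/by-id/ata-SATA_DISK_1 /dev/disk/by-id/ata-SATA_DISK_2 \
  mirror /dev/disk/by-id/ata-SATA_DISK_3 /dev/disk/by-id/ata-SATA_DISK_4
```

Here "tank" is a placeholder pool name; add further `mirror` groups for additional disk pairs.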
 
OK, thank you. So I'm going to create a second zpool with my SATAs. But what should I do with the pool on the SAS drives that the install process created during setup? Keep it or destroy it?
 
Is this a ZFS pool on hardware RAID, or have you set up all disks in IT/JBOD mode? If it's not on hardware RAID, you can keep it; otherwise it's riskier and not a good setup. ZFS has to be used on plain disks, not hardware RAID. It technically works, but you'll lose all the integrity features of ZFS.
 
Yes, I set up all disks as JBOD. While installing Proxmox I chose only the 2 SAS disks as a ZFS mirror, and Proxmox created the boot volume and the rpool. That's why I'm asking. Should I expand this pool with the additional (SATA) disks, or should I create a new zpool? If the latter, what should I do with the rpool?
 
As already said, it is technically possible to expand the pool, but it will not run at optimal speed. I suppose the SAS drives are 10k and the SATA 7.2k? And the SATA are not enterprise-grade disks?
 
OK, thank you. So I will create a new pool with the (I think) enterprise-grade SATAs. One issue is still unclear: what to do with the pool on the SAS disks? Should I keep it or destroy it?
 
Side question: is anyone using a mirrored ZFS setup on 3 disks? I had (on two different servers with different disks but the same hardware controller) simultaneous failures on both disks at the same LBA (pretty strange), and I really hate any RAID with just 2 disks.

With mdadm my standard RAID1 is with 3 disks. Is this possible with ZFS?

Is ZFS smart enough to detect the proper RAID configuration when disks are moved between servers and not inserted in the same order? Obviously this is not an issue for a mirrored RAID, but what about raidz1 or raidz2?

Any best practices (or supported configurations) for RAID in Proxmox?

I'm planning to buy a couple of Supermicro servers with 8 SAS disks and 2 SATA. Both SATA would be used as ZFS ZIL/L2ARC, and the 8 SAS disks would be used to create 2 mirrored volumes with 3 disks each (3 from the start, the other 3 added later by extending the pool when needed).

Is this OK?

What if I have to move disks to a different machine without also moving the cache? Is this possible?
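For what it's worth, ZFS writes the pool configuration into a label on every member disk, so an exported pool can be found again by scanning, regardless of slot order. A sketch, assuming a hypothetical pool named "tank":

```shell
# On the old machine: cleanly export the pool.
zpool export tank
# On the new machine: scan attached disks for importable pools...
zpool import
# ...then import by name; physical slot order does not matter.
zpool import tank
# A missing L2ARC (cache) device is not fatal; the pool imports without it.
# A pool with a missing dedicated log (ZIL) device can be imported with -m:
zpool import -m tank
```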
 
Currently I have all servers on hardware RAID6 with 6x300 GB.

Usable space = 1200 GB, and I can survive up to 2 failed disks.

By using 6x600 GB in 2x RAID1 I'll have the same usable space but with better performance, and I can lose up to 4 disks (2 for each mirror) with a much better recovery time, right?
 
With mdadm my standard RAID1 is with 3 disks. Is this possible with ZFS?
Yes, this is possible, at a write performance penalty. But you get increased read performance, since ZFS will distribute reads among all three disks.
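A sketch of both ways to end up with a three-way ZFS mirror (pool and device names are placeholders):

```shell
# Create a three-way mirror from scratch:
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

# Or grow an existing two-way mirror to three-way by attaching a new disk
# to one of the current members (ZFS resilvers the new disk automatically):
zpool attach tank /dev/sda /dev/sdc
```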
Currently I have all servers on hardware RAID6 with 6x300 GB.

Usable space = 1200 GB, and I can survive up to 2 failed disks.

By using 6x600 GB in 2x RAID1 I'll have the same usable space but with better performance, and I can lose up to 4 disks (2 for each mirror) with a much better recovery time, right?
What you're describing in ZFS slang is a striped mirror, aka RAID10, made of two three-disk mirrors.
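In zpool terms, that layout (and the "start with 3 disks, extend later" plan) would look roughly like this; pool and device names are placeholders:

```shell
# RAID10-style pool of two three-disk mirrors in one step:
zpool create tank \
  mirror /dev/sda /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde /dev/sdf

# Or start with one three-disk mirror and stripe in a second one later:
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc
zpool add tank mirror /dev/sdd /dev/sde /dev/sdf
```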
 
Yes. Usually I use LVM to aggregate multiple RAID1 volumes. I think they are safer and more flexible than a RAID10.

What about my idea? Is it nonsense to replace a RAID6 with a "RAID10" of 3-disk mirrors?
 
I'm also open to alternatives like full SSD storage, but with 2 mirrors of 3 disks each the cost would be too high to get near 1000 GB of usable space, and I don't feel safe with just 2 mirrored disks.
 
By using 6x600 GB in 2x RAID1 I'll have the same usable space but with better performance, and I can lose up to 4 disks (2 for each mirror) with a much better recovery time, right?
Your math is wrong. With a 6-disk RAID6 or raidz2 you have 4 disks for data and 2 disks for parity. With two 3-disk RAID1s in a RAID10 you will have 2 disks for data and 4 disks for redundancy.
 
Exactly: 2 disks for data are 1200 GB (using 600 GB disks).

With raidz2 and 300 GB disks, 4 data disks give the same total space, 4x300 GB = 1200 GB, but with lower performance and lower resiliency.
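The capacity arithmetic in this exchange, spelled out as a small shell sketch of the figures above:

```shell
# 6x300 GB in RAID6/raidz2: 4 data disks + 2 parity disks.
raidz2_usable=$(( (6 - 2) * 300 ))   # survives any 2 failed disks

# 6x600 GB as two striped three-way mirrors: one disk's worth of data per mirror.
raid10_usable=$(( 2 * 600 ))         # survives 2 failed disks per mirror

# The same 600 GB disks in raidz2 instead would give double the usable space:
raidz2_600=$(( (6 - 2) * 600 ))

echo "raidz2 6x300GB usable:       ${raidz2_usable} GB"   # 1200 GB
echo "raid10 2x(3x600GB) usable:   ${raid10_usable} GB"   # 1200 GB
echo "raidz2 6x600GB usable:       ${raidz2_600} GB"      # 2400 GB
```

So the two layouts only come out even because the mirror setup uses disks of twice the size.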
 
Exactly: 2 disks for data are 1200 GB (using 600 GB disks).

With raidz2 and 300 GB disks, 4 data disks give the same total space, 4x300 GB = 1200 GB, but with lower performance and lower resiliency.
That is no proof. Given the same size of disks, with raidz2 you would have 2400 GB, which proves my statement that you halve your effective storage.
 
Given the same size, yes.
But I'm talking about using double-size disks for the RAID10.

By moving my current storage to a RAID10 and doubling the disk sizes I'll get the same usable space, better performance, and better resilience to failures.

Isn't it?
 
Given the same size, yes.
But I'm talking about using double-size disks for the RAID10.

By moving my current storage to a RAID10 and doubling the disk sizes I'll get the same usable space, better performance, and better resilience to failures.

Isn't it?
Yes, in your specific case, but I was talking in general terms ;-)
My advice to you, given your requirements, would be an all-SSD raidz2 with 6x300 GB Intel DC S3500. There is a very good offer on Amazon for these disks: https://www.amazon.com/Intel-Solid-State-Drive-S3500-SSDSC2BB160G401/dp/B00CT98E3K?th=1
 
A raidz2 on SSDs would wear them out very fast due to the write penalty. If I remember correctly, for every write a RAID6 has to write roughly 3 times as much.

What about recovery time? How long does it take to resilver a full SSD RAID6 like the one described above?
 
I forgot another important factor when choosing between raidz and RAID1/RAID10: the kind of CPU used in the storage box. Since calculating parity is a CPU-intensive task, raidz favors higher clock speeds and more cores, so forget about Atom-based solutions for raidz.
 
