Filesystem and disk array configuration recommendation

shalak

Member
May 9, 2021
Hello!

I bought an HP ProLiant DL380e G8 server with 2 x E5-2450L CPUs and 64GB of RAM. I have 4 x WD Red WD40EFAX 4TB HDDs and 2 x ADATA Ultimate SU630 240GB 2.5" SATA SSDs. The server's RAID battery has been refurbished.

Here are the RAID controllers on this machine:

Code:
[12:08:36][root@zoltan]:~# lspci -knn | grep -A 1 'RAID bus controller'
00:1f.2 RAID bus controller [0104]: Intel Corporation C600/X79 series chipset SATA RAID Controller [8086:1d04] (rev 05)
    Subsystem: Hewlett Packard Enterprise C600/X79 series chipset SATA RAID Controller [1590:0048]
--
0a:00.0 RAID bus controller [0104]: Hewlett-Packard Company Smart Array Gen8 Controllers [103c:323b] (rev 01)
    Subsystem: Hewlett-Packard Company P420 [103c:3351]

The server is meant to be a homelab box and a NAS: it will mainly run Nextcloud and OpenMediaVault, plus multiple Docker containers (e.g. a reverse proxy, a DNS sinkhole and game servers like Factorio, Valheim or Minecraft).

I currently have Proxmox installed on the two SSDs, which are configured as RAID-1; the 4TB disks are still waiting in line.

My idea was to use the SSDs for anything that needs fast access and the HDDs for media, backups etc.

For those 4TB HDDs, should I go with hardware RAID + LVM or straight with ZFS? I value being hardware- and vendor-independent, so I'm leaning towards ZFS. However (AFAIK) the drive-bay LEDs won't serve their purpose then, and I don't know how much the performance will go down...

What would be the best recommendation to configure the disks?
 
Personally, I have gone "ZFS only!" for a long time now.

You might decide differently, as your "EFAX" disks probably use SMR (shingled magnetic recording) and may introduce problems if used with ZFS. (At least they did in the past.)

My recommendation: sell/replace those EFAX disks and be happy using ZFS...

Best regards
 
This is really bad news for me as I don't have the means to replace them.

So you're saying there's no way to make them work with ZFS, e.g. via a firmware update? If I set up ZFS on them and there are no errors, would that mean I'm good?

I tried looking this up online, but all the sources are very vague on the topic (to the point that I'm not even 100% sure whether my disks are SMR or not).
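For what it's worth, checking the exact model string should at least confirm which variant I have (a sketch; /dev/sda is just an example device):

Code:
# print the drive identity data reported by SMART
smartctl -i /dev/sda | grep -E 'Device Model|Model Family|Rotation Rate'
# the 4TB WD Red "EFAX" models are the DM-SMR variant; the older "EFRX" models are CMR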
 
So you're saying there's no way to make them work with ZFS, e.g. via a firmware update? If I set up ZFS on them and there are no errors, would that mean I'm good?
Well, I am not sure whether the situation has gotten better with newer ZFS versions (meanwhile the jump from version 0.8.x to 2.x has happened).
When those SMR drives were new and unknown, they produced "not reproducible" errors "without reason" when used with ZFS.
You need to check that for yourself; DuckDuckGo is your friend...

Good luck
 

Thank you for your hints. And thanks to the courtesy of the seller, I will be able to return those HDDs :)

What 4TB drives would you suggest to run in my setup?
 
What 4TB drives would you suggest to run in my setup?
Those classic "WD Red" were re-labeled "WD Red Pro" after that EFAX-chaos. They are fine. Alternatively I do use some Seagate Ironwolf. This is for NAS. For backing virtual machines (redundant) SSDs are a much better choice - with different problems like "wearouts".

Please note that I am just a home user, not someone with thousands of disks in use...

Best regards
 
Those classic "WD Red" were re-labeled "WD Red Pro" after that EFAX-chaos. They are fine. Alternatively I do use some Seagate Ironwolf. This is for NAS. For backing virtual machines (redundant) SSDs are a much better choice - with different problems like "wearouts".

Please note that I am just a home user, not someone with thousands of disks in use...

Best regards
Hi! Thanks for all the notes. I hate to ask, but I hope you might be able to help me out with the BIOS prep I need to make this work.

I went with Seagate after all, and I bought six HDDs this time (I'm planning to run RAID-Z2 on them). However, I'm having trouble making the new drives visible in the operating system. I could only find them in the RAID configuration panel, when trying to create a new RAID volume:

[Screenshot (signal-2021-06-26-200323.jpeg): the new drives listed in the RAID volume creation screen]

If I understand correctly, I need to pass those drives natively to the OS, right? How do I do it?

NOTE: I already have a RAID1 array on this controller (the two SSDs I mentioned before; Proxmox is running on them).
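For reference, this is roughly how the controller's current view can be inspected with HPE's ssacli tool (a sketch; the slot number is an assumption):

Code:
# list controllers with their arrays, logical drives and any unassigned physical drives
ssacli ctrl all show config
# show the status of all physical drives on the P420 (slot number assumed)
ssacli ctrl slot=0 pd all show status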
 
If that is a RAID controller, you most likely can't use ZFS. It might be possible to switch the RAID controller into IT/HBA mode, but then all ports would be without RAID and your RAID1 wouldn't work anymore.
 
Huh, I guess I'm forced to go with native RAID instead of ZFS then :( What would I need to make ZFS possible? The BIOS reports that I have a B120i as well (though I guess I'd need to reconnect some SATA cables inside?). What should I get to have those drives presented natively to the OS, like on a regular desktop PC? Just a PCIe SATA controller with at least six ports?
 
Yes, just a normal SATA controller without any RAID features. But make sure the controller chipset is supported by the 5.4 Linux kernel.
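A quick way to check that is to look at which kernel driver claims the controller, e.g. (a sketch):

Code:
# list storage controllers together with the kernel driver bound to them
lspci -nnk | grep -iA3 'SATA\|SAS\|RAID'
# a plain AHCI controller should report "Kernel driver in use: ahci"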
 
OK, I found some documentation on how to enable HBA mode on the P420i with firmware 8.0+, as well as how to work around the lack of boot support in HBA mode by placing /boot on an SD card. That means, as you mentioned, that my RAID1 will have to go. Fortunately, I don't mind reinstalling the OS there, so now I'm waiting for the SD card to ship.
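From what I found, on firmware 8.0+ the switch itself looks roughly like this (untested on my box so far; the slot number is an assumption, and enabling HBA mode drops all existing logical drives on that controller):

Code:
# put the Smart Array into HBA/pass-through mode (destroys existing logical drives!)
ssacli ctrl slot=0 modify hbamode=on forced
# after a reboot the disks should show up as plain /dev/sdX devices
lsblk -o NAME,MODEL,SIZE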

But this overhaul opens up the possibility of making better use of the two SSDs I have. I was thinking of keeping the 2 x SSD in RAID1 and putting the rest of the drives in RAID-Z2. Or maybe there is a better way to utilize them (e.g. use them as some kind of ZFS cache)?

What configuration do you think would be the best?
 
OK, I found some documentation on how to enable HBA mode on the P420i with firmware 8.0+, as well as how to work around the lack of boot support in HBA mode by placing /boot on an SD card. That means, as you mentioned, that my RAID1 will have to go. Fortunately, I don't mind reinstalling the OS there, so now I'm waiting for the SD card to ship.
Booting from an SD card or USB stick (so just the boot partition) should be fine. But your root partition shouldn't be on an SD card. Proxmox writes far too much (around 30GB per day just for logs/metrics/configs) to be installed on a normal consumer SD card or USB stick, so it could fail very fast.
So I see 3 options:
1.) Buy a decent industrial SD card or industrial USB stick that can handle the writes and do a normal PVE ISO installation.
2.) Do a normal PVE ISO installation and try to minimize the writes to the SD card afterwards. There are some forum threads discussing how to minimize them by using RAM disks and so on.
3.) Install Debian Buster. Make sure to use the SD card only for the boot + EFI/GRUB partition, then create an mdraid RAID1 on both SSDs and use it as your root partition. Install PVE on top of that Debian, partition the remaining space on the SSDs and use it (ZFS mirror or mdraid RAID1 with LVM on top) as your VM storage. (A rough sketch follows after this list.)
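A very rough sketch of the mdraid part of option 3, assuming the two SSDs are /dev/sda and /dev/sdb and each already has a partition set aside for the root mirror (the real partitioning is normally done from the Debian installer):

Code:
# build the RAID1 that will hold the root filesystem (device names are assumptions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0
# after Debian Buster is installed onto /dev/md0, add the PVE repository and install Proxmox VE
# (also import the Proxmox release GPG key as described in the "Install Proxmox VE on Debian Buster" wiki article)
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve.list
apt update && apt install proxmox-ve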
But this overhaul opens up the possibility of making better use of the two SSDs I have. I was thinking of keeping the 2 x SSD in RAID1 and putting the rest of the drives in RAID-Z2. Or maybe there is a better way to utilize them (e.g. use them as some kind of ZFS cache)?

What configuration do you think would be the best?
Cache SSDs are in most cases not really worth it. HDDs in raidz2 are really bad as VM storage, and adding cache SSDs wouldn't make it much better. It's much better to use a pair of mirrored SSDs as your VM storage and for all the hot-storage stuff.
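If you end up with ZFS on both sets of disks, the two pools could look roughly like this (just a sketch; the pool names and /dev/disk/by-id/ paths are placeholders for your actual drives):

Code:
# mirrored SSD pool for VMs and other hot data (use stable by-id names, not sdX)
zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B
# six-disk raidz2 pool for media, backups and other cold data
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-HDD_1 /dev/disk/by-id/ata-HDD_2 /dev/disk/by-id/ata-HDD_3 \
    /dev/disk/by-id/ata-HDD_4 /dev/disk/by-id/ata-HDD_5 /dev/disk/by-id/ata-HDD_6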
 

Yep, option 3 seems to be the way to go. Thank you for all the input! I'll go with mirrored SSDs in ZFS, install Proxmox on them, and use the same volume for the most crucial VMs.
 
I'll go with mirrored SSDs in ZFS, install Proxmox on them
That can be hard, because Debian doesn't support ZFS out of the box, so the Debian installer won't offer it as an option for your root partition. But it may be possible to install without ZFS and move the root partition to a ZFS pool later.
 
This tutorial claims it's just a matter of installing ZFS on the live Debian instance (via "apt install zfsutils-linux").
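For the record, on Buster that boils down to roughly this (a sketch; the ZFS packages live in the contrib section, and the DKMS module needs the kernel headers to build):

Code:
# enable the "contrib" component in /etc/apt/sources.list first, then:
apt update
apt install linux-headers-$(uname -r) zfs-dkms zfsutils-linux
modprobe zfs    # load the module without rebooting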
 
@Dunuin I hope you don't mind me nagging you again :)

I found some info claiming that it's a bad idea to run ZFS on a P420 in HBA mode. Now I'm wondering whether I should go with an H220 or an H240. Which one would be better? And will I still need to boot from the SD card in that configuration?
 
You need an HBA in IT (initiator target) mode so the host can access the drives directly, without any abstraction in between. I'm not sure which PERC can do that.
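Once the drives are really passed through, a quick sanity check is whether each disk shows up as its own device with its real model and serial (a sketch):

Code:
# every physical disk should appear individually, with its real model/serial number
lsblk -o NAME,MODEL,SERIAL,SIZE
# and the controller should be claimed by a plain HBA/AHCI driver rather than a RAID stack
lspci -nnk | grep -iA3 'SAS\|SATA\|RAID'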
 
