ZFS disk layout for new install

Shaun Hills

Oct 3, 2016
We're considering a new install of Proxmox VE, onto hardware that we already have.

The machine is an HP DL380 with 128GB RAM. It has three 1.8TB SAS disks, and five 280GB SAS disks. It has an HP P440 Smart Array hardware RAID controller, though we don't have to use this. The machine is currently in use. So I can't "just try it out" - I need to do a bit of planning.

We'd like to use ZFS for VM storage if possible, because of reliability and flexibility. We also want reliability for the Proxmox host OS, but we could do that with more traditional (e.g. hardware) RAID if necessary.

Constraints: we can't add new disks right now, though we might be able to in future. And we don't have easy access to the data centre - it can take a few weeks to arrange.

Is the following setup possible/advisable:
  • Put the RAID controller in (ideally) HBA or (otherwise) JBOD mode
  • Create ZFS pool /zfs with three mirrored vdevs: two 2x280GB and one 2x1.8TB
  • Keep one 1.8TB and one 280GB disk spare
  • Use the ZFS pool as the boot volume and also as VM block storage.
I'm aware that differently sized vdevs aren't ideal. In future we might replace the 280GB disks and could increase the size of the vdevs then. We'd like to always keep at least one spare disk because of the time it takes to get access to the server.
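
Roughly what I have in mind, sketched as a zpool layout (device paths are placeholders, and if we boot from the pool the Proxmox installer would build it for us rather than us running this by hand):

Code:
  zpool create -o ashift=12 zfs \
      mirror /dev/disk/by-id/280gb-disk-1 /dev/disk/by-id/280gb-disk-2 \
      mirror /dev/disk/by-id/280gb-disk-3 /dev/disk/by-id/280gb-disk-4 \
      mirror /dev/disk/by-id/1.8tb-disk-1 /dev/disk/by-id/1.8tb-disk-2
  # remaining 280GB and 1.8TB disks stay unused as spares
  # (or could be added as hot spares: zpool add zfs spare <disk>)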

Any comments welcomed :)
 
Are these "real" 1.8 TB SAS disks with 10k rpm, or just NL-SAS 7.2k? Are the 280 GB SAS disks SSDs? I don't know of such a strange size; they are normally based on 300 GB / 450 GB multiples.

Don't mix devices because the performance gets unpredictable.
 
The 1.8TB disks are "real" (?) 10K disks. These to be exact. My email says the 280GB ones are part number EH0300JDYTH, which on the HP website is 300GB, but I'm pretty sure the server reports it as 280GB (279GB actually). Shrug. They're all spinning disks; no SSDs. Over time we will probably replace the smaller disks with more 1.8TB units so we don't have mixed devices, but we can't do that just now for boring logistical reasons.

I accept that mixing devices isn't ideal. But assuming we need to, at least for a while, does the proposed setup seem like a good way to do it?
 
The P440 controller cannot, IMHO, operate in JBOD mode, so this is not optimal (one could also say bad). For best ZFS operation you need JBOD and as little intelligence as possible between your OS and your disks.

Check if you can flash the firmware to one that supports JBOD, or change the controller to e.g. a SAS2008-based one. One could create a single RAID0 volume per disk; it is not recommended, yet most of the time it is the only way to do this. Best is a dumb controller.
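
If you have to go that route, the single-disk RAID0 workaround looks roughly like this with the HP CLI tools (the tool is hpssacli or ssacli depending on the version; the slot number and drive bay addresses here are only examples, check "ctrl all show config" for your real ones):

Code:
  ssacli ctrl slot=0 pd all show
  # one RAID0 logical drive per physical disk:
  ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
  ssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
  # ...repeat for each remaining disk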
 
Good point. So I checked the HP documentation and I think it supports an HBA mode:

[Screenshot of the HP documentation describing the controller's HBA mode]

If that's correct, we should be OK.
 

Not really. But the server will need to be rebuilt if/when we install Proxmox, so I can find out then :) Worst case, I guess we can create a RAID0 volume on each disk, but I'd rather not do that.

I'll post back, as that might be useful information.
 
Update: we didn't have much luck with the P440ar in HBA mode. You can put it in HBA mode just fine via HP Smart Storage Administrator, and Debian/Proxmox VE can see the disks during the install, but after a day or so of trying I couldn't work out how to get it to actually boot after the install. We tried first with ZFS, then with mdadm RAID1.

But it boots fine with the OS on hardware RAID1, so that's what we're doing now. Unfortunately HBA with the P440ar is all-or-nothing: you can put all disks in RAID mode or all disks in HBA mode. The ideal setup would probably be a controller that supports both hardware RAID1 for the OS and direct access to the storage disks, or possibly two controllers.

Having got the system up, I'm re-thinking ZFS. We would need to use RAID0 volumes for all the storage disks in the pool, which isn't ideal. Perhaps we would be better off creating another couple of hardware RAID1 devices, assigning them to a volume group and using LVM-thin across both.
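
If we went down that road, I imagine it would look something like this (device names, VG/pool names and the 95% figure are just placeholders; I think the pvesm syntax is roughly right but I haven't checked it):

Code:
  # two hardware RAID1 logical drives, say /dev/sdb and /dev/sdc
  pvcreate /dev/sdb /dev/sdc
  vgcreate vmdata /dev/sdb /dev/sdc
  # thin pool spanning most of the VG (leaving headroom for metadata)
  lvcreate -l 95%FREE --type thin-pool --name data vmdata
  # register it with Proxmox VE as LVM-thin storage
  pvesm add lvmthin vm-thin --vgname vmdata --thinpool data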

Any thoughts on that? Or on LVM-thin vs ZFS in general?
 
Depending on the system, we use a normal Pxxx for two local disks and an MSA60 for external ZFS on an IT-mode HBA controller - or we use only the IT-mode HBA for all disks with ZFS and install directly onto that. We replaced all ZFS setups on "intelligent" RAID controllers with dumb ones.

Even with the great step forward that LVM-thin represents, I'd always prefer ZFS.
 
We were hoping to use only the HBA mode of the P440ar Smart Array controller for all disks. Unfortunately we couldn't get the server to boot. We were trying to boot off mdadm RAID1 (for redundancy) via UEFI - this isn't a particularly simple, common or well-tested configuration, so maybe we did something wrong there rather than anything being wrong with the controller.
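
For the record, the mdadm attempt looked roughly like this (from memory, device names are placeholders; as I understand it the EFI system partition can't live inside a normal md array, so each disk kept its own ESP):

Code:
  # per-disk layout: sdX1 = EFI system partition (FAT32), sdX2 = md RAID1 member for /
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.ext4 /dev/md0
  # the two ESPs then have to be kept in sync by hand or via a hook script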

For now we're using ZFS on top of hardware RAID0 volumes.

We might be upgrading the hardware next year. So I'll look at the possibility of keeping the P440ar for a RAID1 OS/boot volume but adding an HBA controller for the data disks. That sounds like it might be the easiest.
 
I also tried in the past to get such a setup working with the older HP adapters (P400, P410), but the RAID0-volume stuff is not good for ZFS.

Some of my systems run a small mdadm partition (4 GB) for Proxmox VE, and then the rest of each disk is in a ZFS pool. That works well (but it is still not an officially supported setup).
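
Roughly, each disk in such a box is partitioned like this (sizes and names are just an illustration of my setup, nothing official):

Code:
  # sdX1  ~512M  EFI/boot
  # sdX2     4G  mdadm RAID1 member -> Proxmox VE root
  # sdX3   rest  ZFS pool member
  zpool create -o ashift=12 tank mirror /dev/sda3 /dev/sdb3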
 
Have you ever done that mdadm setup with an HP adapter in HBA mode?

The Debian installer could even see the disks. It just wouldn't boot after the install had finished. Very frustrating :(
 
No, I thought the HBA stuff never worked correctly on the Pxxx. I tried it once and couldn't get it working either, so I needed to use the same RAID0-volume trick you described, but I never used it with ZFS. I just bought an IT-flashed "dumb" controller and never looked back. The thing cost below 100 bucks.
 
Hello! I have the same configuration (Gen9, P440ar, Proxmox 5).
I have 2 SAS disks on the P440ar for Proxmox (RAID1, ext4).

ZFS: 3 SATA 7200 rpm disks on the onboard controller (AHCI-connected) + a 256 GB SSD for cache.
This works fast :)
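
In case it helps anyone, the SSD cache is just an L2ARC device added to the pool - something like this (device names are placeholders and the raidz1 layout is only for illustration):

Code:
  zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc
  # add the 256 GB SSD as a read cache (L2ARC)
  zpool add tank cache /dev/sdd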
 
