Dell R810, PERC H700, ZFS (for replication)

Paul Murdock

New Member
Apr 8, 2019
G'day,

I've got a Dell PowerEdge R810 server that came with 6 x 146GB SAS drives connected to a PERC H700 RAID card. I currently have it working in RAID-6 mode (the RAID card handles everything). However, I would very much like to use ZFS snapshots, and I'm trying to see whether there are any other benefits I could gain from ZFS or whether it really is just the snapshots. From my understanding and research, a few things would have to change:

1) The Dell PERC H700 cannot do IT mode and simply pass the SAS drives through to the OS, so the card will have to be removed (there's no firmware flash to put it in IT mode).

2) I have ordered an LSI 9211-8i PCIe card to act as the HBA for these 6 SAS drives. I will have to remove the PERC H700 card and let the LSI 9211 connect to the drives. (I will lose the ability to control the LEDs on the front of the case and on each caddy as a result of this; any way around that?)

3) I haven't set up a ZFS pool yet. How does one partition it to hold the OS and then the VMs? I'm not sure I fully understand how I would set up this pool. I would like to create one RAIDZ2 pool from the 6 hard drives. Does one create the pool, and then the OS gets its own partition while the VMs/containers get another partition on the same pool? It seems like the pool is the partition, and a pool can't really be subdivided further. Is that a correct assessment? If so, will I have to sacrifice one of the hard drives as a local disk solely for the OS install, then create a 5-disk RAIDZ2 pool for all the VMs?

cheers!
Paul
 
However, I would very much like to use ZFS snapshots, and I'm trying to see whether there are any other benefits I could gain from ZFS or whether it really is just the snapshots.
Self-healing, bitrot detection and correction, and compression.
The Dell PERC H700 cannot do IT mode and simply pass the SAS drives through to the OS, so the card will have to be removed (there's no firmware flash to put it in IT mode).
If you can't flash the card to IT mode, you have to replace it; JBOD can't be used on the H700.
I will lose the ability to control the LEDs on the front of the case and on each caddy as a result of this; any way around that?
You can use tools like ledmon.
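To make that concrete, a sketch of how ledmon's ledctl tool is typically driven (device paths are placeholders; note the ledmon daemon itself watches MD RAID, so with ZFS you would usually trigger ledctl yourself, e.g. from a ZED script, and the backplane must support SGPIO or SES-2):

```shell
# Blink the locate LED on one drive's slot (placeholder device path)
ledctl locate=/dev/sda

# Stop blinking it again
ledctl locate_off=/dev/sda

# Mark a slot as failed / clear the failure indication
ledctl failure=/dev/sda
ledctl off=/dev/sda
```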

Does one create the pool, and then the OS gets its own partition, and the VM/Containers get another partition on this same pool?
ZFS is a dynamic filesystem and logical volume manager.
A ZFS disk has only one data partition, and ZFS uses datasets for what you're calling partitions.
So the root filesystem gets its own dataset, and so does every guest.
If so, will I have to sacrifice one of the hard drives as a local disk solely for the OS install, then create a 5-disk RAIDZ2 pool for all the VMs?
You can do both, but splitting the data from the root filesystem brings more flexibility.
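To make the dataset idea concrete, here is a rough sketch of the layout (device names are placeholders, and this only illustrates the pool/dataset structure, not a full bootable root-on-ZFS install; the Proxmox installer builds something like this for you when you choose ZFS RAID-Z2):

```shell
# One RAIDZ2 pool across all six disks (4k-sector alignment)
zpool create -o ashift=12 rpool raidz2 sda sdb sdc sdd sde sdf

# Separate datasets instead of partitions:
zfs create rpool/ROOT          # container for OS root datasets
zfs create rpool/ROOT/pve-1    # the root filesystem lives here
zfs create rpool/data          # VM disks and container volumes

# Cheap win while you're at it: inline compression for the whole pool
zfs set compression=lz4 rpool
```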
 
2) I have ordered an LSI 9211-8i PCIe card to act as the HBA for these 6 SAS drives. I will have to remove the PERC H700 card and let the LSI 9211 connect to the drives. (I will lose the ability to control the LEDs on the front of the case and on each caddy as a result of this; any way around that?)
The Dell H310 can do JBOD (passthrough mode) and will fully support your hardware.
 
Just as an update,

I received my LSI 9211-8i and flashed it to IT mode, removed the Dell PERC H700, installed the LSI 9211-8i, and rebooted. All drives showed up, and I was able to install Proxmox with ZFS in RAID-Z2 mode. Working great so far!

Got ZED working for ZFS notifications by email. A great way to keep track of the health of your ZFS pools.
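For anyone following along, ZED's mail settings live in /etc/zfs/zed.d/zed.rc; a minimal excerpt (the address is a placeholder, and a working local mail setup is assumed):

```shell
# /etc/zfs/zed.d/zed.rc (excerpt)
ZED_EMAIL_ADDR="root@example.com"   # where pool event notifications go
ZED_NOTIFY_INTERVAL_SECS=3600       # rate-limit repeated notifications
ZED_NOTIFY_VERBOSE=1                # also mail when a scrub finishes cleanly
```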

The only item I haven't figured out yet is the LED control to signal when certain disks have problems. I understand "ledmon" is the thing to use, but I still need to figure out whether my Dell R810 supports the SES protocol for communicating with the LEDs.
 
The only item I haven't figured out yet is the LED control to signal when certain disks have problems

ZED is your best friend here. It's practically impossible to hit a situation where disk X has a problem and the ZFS pool doesn't see it (in which case ZED will send you an email). You can also use the script from this URL:

https://calomel.org/zfs_health_check_script.html

which checks your pool for the following:

  • Health - Check that no zfs volume is degraded or broken in any way.
  • Capacity - Make sure all pool capacities stay below 80% for best performance.
  • Errors - Check the READ, WRITE and CKSUM (checksum) columns for drive errors.
  • Scrub Expired - Check that every volume has been scrubbed within the last 8 days.
I have used this tool for many years and it is rock solid!
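The first two checks boil down to a couple of zpool queries; here is a minimal sketch of the Health and Capacity checks (the function names are my own, and the sample line below stands in for live `zpool list -H -o name,health,capacity` output, since the real command needs a live pool):

```shell
#!/bin/sh
# check_pool expects one tab-separated line of
# `zpool list -H -o name,health,capacity` output.
check_pool() {
    name=$(printf '%s\n' "$1" | cut -f1)
    health=$(printf '%s\n' "$1" | cut -f2)
    cap=$(printf '%s\n' "$1" | cut -f3 | tr -d '%')
    if [ "$health" != "ONLINE" ]; then
        echo "$name: DEGRADED or FAULTED ($health)"
    elif [ "$cap" -ge 80 ]; then
        echo "$name: over 80% full (${cap}%)"
    else
        echo "$name: ok"
    fi
}

# On a real system you would pipe zpool output in, e.g.:
#   zpool list -H -o name,health,capacity | while read -r line; do
#       check_pool "$line"
#   done
check_pool "$(printf 'tank\tONLINE\t42%%')"   # prints "tank: ok"
```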

And hddtemp can check your drives' temperatures. With some scripting skills, you can send email alarms if a drive in your server runs too hot.
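Something along these lines, for example (the device path and the 45 C threshold are placeholders; `hddtemp -n /dev/sdX` prints just the Celsius value, and a working local mail setup is assumed):

```shell
#!/bin/sh
# Decide whether a temperature reading should raise an alarm.
temp_alarm() {   # $1 = temperature in C, $2 = alarm threshold in C
    if [ "$1" -ge "$2" ]; then echo "alarm"; else echo "ok"; fi
}

# On a real system, e.g. from cron (hddtemp needs root):
#   t=$(hddtemp -n /dev/sda)
#   [ "$(temp_alarm "$t" 45)" = "alarm" ] && \
#       echo "/dev/sda is at ${t}C" | mail -s "HDD temperature alarm" root
temp_alarm 40 45   # prints "ok"
```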

Good luck, and do not waste your time with ... some simple LEDs ;)
 
