Dell PERC IT Mode To IR Mode

Grunt
Sep 6, 2022
I have a PERC H200 that was flashed, prior to my ownership, to IT mode. I used this server as a FreeNAS host. I want to repurpose it as a Proxmox host and use its local storage. It seems like it would be better to use hardware RAID instead of leaving it up to Proxmox to raid these drives together. However, I can't seem to find a reliable guide on how to reflash to IR mode. The best I found was this guide, but it says, "Download and copy the H200 folder to the flash drive." and doesn't give the source of that folder...

Anybody have any advice/opinions on this? It's an old PowerEdge R510, so it's not SUPER powerful, and it's meant as a test/lab environment.
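For what it's worth, H200 reflashing guides generally come down to LSI's sas2flash utility. A rough sketch of the usual sequence follows; note that the firmware file names below are placeholders, not the actual files — those come from a Dell/Broadcom SAS2008 firmware package, which is the "H200 folder" such guides assume you already have:

```shell
# Record the controller's SAS address first -- you may need to restore it later.
sas2flash -listall

# Wipe the existing (IT) flash. Do NOT reboot between the erase and the
# reflash, or the card can be left in an unbootable state.
sas2flash -o -e 6

# Write the IR firmware and boot ROM. "2118ir.bin" and "mptsas2.rom" are
# placeholder names standing in for the files from the firmware package.
sas2flash -o -f 2118ir.bin -b mptsas2.rom

# Restore the SAS address noted earlier (placeholder value).
sas2flash -o -sasadd 500605b0xxxxxxxx
```

Double-check the exact erase level and file names against a guide for your specific card revision before erasing anything.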
 
I can't help you with the flash, but what about ZFS? It can turn the drives into a clean and stable RAID. RAID level recommendation for performance: RAID10.
Are you recommending I configure Proxmox to handle the RAID? Wouldn't I be sacrificing CPU using software RAID instead of letting the RAID card do it? Also, isn't ZFS memory hungry? I know that was typically the guidance with FreeNAS: make sure you have lots of RAM. This thing is maxed out at 128GB, so it's not a huge node. I'd like to keep that RAM available for the guests to use without penalizing storage performance.
 
ZFS provides bit-rot protection, snapshots, and compression. I use a H200 (flashed to IT-mode) in a Dell R200 as a bare-metal Proxmox Backup Server using ZFS RAID-1. No issues.
 
Are you recommending I configure Proxmox to handle the RAID? Wouldn't I be sacrificing CPU using software RAID instead of letting the RAID card do it?
Your H200 is only capable of RAID1, so switching to software RAID doesn't really cost you anything performance-wise. What you GAIN using ZFS instead is all the stuff @jdancer mentioned. JUST the compression and snapshots make it preferable to hardware RAID, EVEN IF the hardware RAID were faster, which it wouldn't be.
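For reference, a ZFS "RAID10" (striped mirrors) can be set up from the Proxmox installer, or by hand roughly like this. The pool name and device names are hypothetical; on real hardware you'd use stable /dev/disk/by-id/ paths:

```shell
# Two mirrored pairs striped together = a RAID10-style layout.
# ashift=12 aligns the pool to 4K sectors, the usual safe default.
zpool create -o ashift=12 tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# One of the features mentioned above: transparent compression.
zfs set compression=lz4 tank
```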
 
Are you recommending I configure Proxmox to handle the RAID? Wouldn't I be sacrificing CPU using software RAID instead of letting the RAID card do it? Also, isn't ZFS memory hungry? I know that was typically the guidance with FreeNAS, ensure you had lots of RAM. This thing is maxed out at 128GB, so it's not a huge node. I'd like to keep that RAM available for the guests to use without penalizing storage performance.
First of all, ZFS is not just software RAID; what you're thinking of is mdadm. ZFS does require a little more performance, but it also depends on the use case. If you're interested, I'd recommend trying the features out in a VM and reading the documentation, which also explains how to reduce the RAM utilization. I've been running a backup server with ZFS for a good eight years on only 32GB of RAM (SSD only) and it works as it should, with a few small VMs on it as well; VM I/O is kept separate from backup storage. Basically, ZFS is hardware-independent.
It also depends on whether you need the ZFS features or not. If you use the RAID controller and the controller dies, you can't just put your drives back into service without the same type of RAID controller. These are all things that can help you decide.
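On the RAM question: what eats memory is ZFS's ARC cache, and it can be capped with the `zfs_arc_max` module parameter (documented in the OpenZFS module parameters reference). A minimal sketch, assuming you want to cap it at 8 GiB — the figure itself is just an example:

```shell
# zfs_arc_max takes a byte value; 8 GiB here is an example figure only.
ARC_MAX=$((8 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX}"
# Put the printed line into /etc/modprobe.d/zfs.conf, run
# `update-initramfs -u`, and reboot -- or apply it live with:
#   echo ${ARC_MAX} > /sys/module/zfs/parameters/zfs_arc_max
```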
 
If you use the raid controller and your drives die, then you can't just put them back into service without the same raid controller.
What?? So for a dead disk's second life it has to be paired with the same RAID controller as in its old life? Where does that fairy tale come from? The RAID controller doesn't write error markers to the disks that would block their use on another controller. But anyway, if the disk has died, what second life would that even be? :)
 
What?? So for a dead disk's second life it has to be paired with the same RAID controller as in its old life? Where does that fairy tale come from? The RAID controller doesn't write error markers to the disks that would block their use on another controller. But anyway, if the disk has died, what second life would that even be? :)
Today is mix-up day. I meant the controller, of course; I've corrected it above. o_O:p
 
If you use the RAID controller and the controller dies, you can't just put your drives back into service without the same type of RAID controller.
You still have a couple of options then: the same RAID controller, a newer or older controller from the same manufacturer family, going without hardware RAID using mdadm, or in the end switching to ZFS instead. So the options aren't that bad. And in the last 30 years, across the really many servers I've supported, I've never even seen a broken LSI RAID controller. :)
 
You still have a couple of options then: the same RAID controller, a newer or older controller from the same manufacturer family, going without hardware RAID using mdadm, or in the end switching to ZFS instead. So the options aren't that bad. And in the last 30 years, across the really many servers I've supported, I've never even seen a broken LSI RAID controller. :)
HPE was a little different. I did have a broken controller a few times :eek: . But even worse than a broken controller was a bad firmware update, which corrupted the data. I can still remember it well; it was quite a back-and-forth with HP.
But I've only been using open hardware for a long time now: a SuperMicro or Asus board with an LSI controller (but with ZFS)... then you are happy. No failures.
 
HPE was a little different. I did have a broken controller a few times
Of course we observed quite a lot of broken P8<xy> controllers too, but we also did regular firmware maintenance updates. You could switch from a P820 to a P800, swap in the same model, or go from a P800 to a P820 or P812, never any problem: just swap the card and boot the server, that's all.
It's a pity you experienced that, but it's not normal with HP.
 