ZFS with PERC H745: is RAID0 the same as JBOD?

Tony
Nov 4, 2010
I am trying to configure ZFS on a server with PERC H745 controller and 4 disks. The manual says this is a raid card that supports eHBA (Enhanced HBA) mode. I set the card to that mode, and set all the disks to non-RAID mode. After that the disks show up in Proxmox as individual disks. However when I inspect the controller using the perccli utility, it says there are 4 virtual drives:
Code:
Virtual Drives = 4

VD LIST :
=======

-------------------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
-------------------------------------------------------------------------
0/3   RAID0 Optl  RW     Yes     NRWTD -   OFF 931.512 GB NonRAID Disk 3
1/5   RAID0 Optl  RW     Yes     NRWTD -   OFF 931.512 GB NonRAID Disk 5
2/7   RAID0 Optl  RW     Yes     NRWTD -   OFF 931.512 GB NonRAID Disk 7
3/1   RAID0 Optl  RW     Yes     NRWTD -   OFF 931.512 GB NonRAID Disk 1
-------------------------------------------------------------------------

It's confusing that perccli reports these as virtual drives of type RAID0, while their names say "NonRAID".

I tried to set the disks to JBOD mode, but it didn't work (operation not supported).

So I wonder if I can go ahead and use these "virtual drives" for ZFS?

I know this is not specific to Proxmox; just wonder if someone here has faced the same issue.

Thanks in advance for any hint.
 
I am 99% sure that when you set a PERC controller to JBOD, each disk becomes a single-drive RAID0 volume. So you have 4 single-disk RAID0 arrays.
 
Have you checked whether Proxmox can see the SMART status of the drives? Most RAID cards don't give the OS direct access to the drives and their SMART data. In that case ZFS's data-integrity guarantees are potentially compromised, so while it will *work*, it's not a recommended setup. Ask yourself: how valuable is the data stored on these drives to you?
 
If Proxmox can see the SMART status of the drives, does that mean they are exposed to the OS as raw disks rather than as RAID0 virtual drives? (I admit I don't understand the difference well enough; I just read in the ZFS docs that one is preferred over the other.)

Or is there a reliable way to check if it's raw disk or RAID0 drive?
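One rough check is whether plain smartctl can talk to the disk directly. A sketch (device names and the megaraid index below are placeholders, not from this thread):

```shell
# Sketch only: testing whether the OS sees the raw disk or a virtual drive.
#
# On a passed-through disk, plain smartctl works and reports the physical
# drive's model, serial, and SMART status:
#
#   smartctl -i /dev/sda
#
# If the disk only answers via the controller-specific device type, the OS
# is talking to the RAID layer instead of the raw disk:
#
#   smartctl -d megaraid,0 -i /dev/sda
#
# Tiny helper: given saved "smartctl -i" output, report whether SMART is
# exposed directly (the usual hint that the disk is passed through).
looks_passthrough() {
  if printf '%s\n' "$1" | grep -q "SMART support is: Enabled"; then
    echo "likely pass-through"
  else
    echo "likely behind RAID layer"
  fi
}

looks_passthrough "SMART support is: Enabled"   # prints "likely pass-through"
```

It's a heuristic, not proof: some controllers relay SMART for virtual drives too, so check the reported model/serial against the physical disk as well.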
 
If SMART data is shown, chances are the drive is being passed through directly, although I would agree with @vix9: it's most likely RAID0. ZFS is a pretty complex beast under the hood, so I personally wouldn't run ZFS on a RAID controller.

Your choices would seem to be:
1. Risk it
2. Use your controller as a full RAID controller and use LVM format
3. Buy an LSI controller to use as an HBA
 
I did configure 4 disks as follows:
1 disk as Non-RAID
3 disks as 1 single RAID0

perccli sees them as follows:

Code:
-------------------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name           
-------------------------------------------------------------------------
0/1   RAID0 Optl  RW     Yes     NRWTD -   OFF 931.512 GB NonRAID Disk 1
1/239 RAID0 Optl  RW     Yes     RWBD  -   OFF   2.727 TB               
-------------------------------------------------------------------------

Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
FWB=Force WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency

I can also access SMART data of the Non-RAID disk. So it seems that, despite being reported as a virtual drive by perccli, the Non-RAID disk really is in pass-through mode. The flags from perccli also seem to support this:

non-RAID: NRWTD = No Read Ahead, WriteThrough, Direct IO
RAID0: RWBD = Read Ahead Always, WriteBack, Direct IO
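The Cache flag strings can be decoded mechanically from the legend perccli prints under the table. A small sketch (it only handles the cache-policy flags from that legend, not every flag perccli can emit):

```shell
# Decode a perccli/storcli VD Cache flag string using the legend the tool
# prints: NR=No Read Ahead, R=Read Ahead Always, WB=WriteBack,
# WT=WriteThrough, C=Cached IO, D=Direct IO.
# Longer prefixes (NR, WB, WT) must be matched before shorter ones (R).
decode_cache_flags() {
  s=$1; out=""
  while [ -n "$s" ]; do
    case $s in
      NR*) out="$out No-Read-Ahead";     s=${s#NR} ;;
      R*)  out="$out Read-Ahead-Always"; s=${s#R}  ;;
      WB*) out="$out WriteBack";         s=${s#WB} ;;
      WT*) out="$out WriteThrough";      s=${s#WT} ;;
      C*)  out="$out Cached-IO";         s=${s#C}  ;;
      D*)  out="$out Direct-IO";         s=${s#D}  ;;
      *)   out="$out ?";                 s=${s#?}  ;;
    esac
  done
  echo "${out# }"
}

decode_cache_flags NRWTD   # No-Read-Ahead WriteThrough Direct-IO
decode_cache_flags RWBD    # Read-Ahead-Always WriteBack Direct-IO
```

Note the WriteThrough/Direct-IO combination on the non-RAID disk: the controller isn't caching writes for it, which is exactly what you want for a pass-through disk under ZFS.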

btw I don't mind running ZFS on top of hardware RAID; people on Server Fault report doing that regularly in production, so I think "ZFS on hardware RAID is bad" is mostly a myth. Maybe one day I will regret this.
 
Well, an easy way to check. Do you have any disks with known SMART errors that you are waiting to destroy? Pop one of them in and see what happens. I know having full access to SMART has definitely been a benefit in the past on our FreeNAS/TrueNAS machines.
 
I assembled a ZFS pool from a bunch of disks that should have been retired by now.

I can see the point of running ZFS in HBA mode: ZFS gets a "closer look" at the individual disks and can potentially handle errors better than when it is given a single vdev built by hardware RAID. In the latter case ZFS sees only one "device"; the job of ensuring the integrity of the physical disks falls to the hardware RAID, which need not be worse if the RAID card is decent (just my guess).


Code:
root@fenox-pve:~# zpool status
  pool: zpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 03:32:00 with 0 errors on Sun Apr 10 03:56:01 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    zpool                                           ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        ata-ST2000DM001-9YN164_Z1E30Q97             ONLINE       0     0     0
        ata-ST2000DM001-1CH164_Z1E37CJ7             ONLINE       0     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2015447   ONLINE       9     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2027446   ONLINE       7     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2127591   ONLINE      13     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2175802   ONLINE       7     0     0
        ata-ST2000DM001-1CH164_Z2F0EDXQ             ONLINE       0     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2172661   ONLINE      12     0     0
        ata-Hitachi_HUA722020ALA330_JK1151YAHHHTGZ  ONLINE       0     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2175792   ONLINE       5     2     2
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2175800   ONLINE       4     0     0
        ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2186897   ONLINE       5     2     0

errors: No known data errors
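Those per-disk READ/WRITE/CKSUM counters are exactly what disappears behind a single hardware-RAID vdev. Pulling the noisy disks out of `zpool status` output is a one-liner; a sketch, assuming the default column layout (NAME STATE READ WRITE CKSUM):

```shell
# Sketch: list devices with nonzero READ/WRITE/CKSUM counters from
# `zpool status` output. With live output: zpool status | noisy_disks
noisy_disks() {
  awk '$2 == "ONLINE" && NF == 5 && ($3+0 > 0 || $4+0 > 0 || $5+0 > 0) \
       { print $1 " R=" $3 " W=" $4 " C=" $5 }'
}

# Demo on two lines in the format shown above:
printf '%s\n' \
  'ata-ST2000DM001-9YN164_Z1E30Q97 ONLINE 0 0 0' \
  'ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2015447 ONLINE 9 0 0' \
  | noisy_disks
# prints: ata-WDC_WD2002FYPS-18U1B0_WD-WCAVY2015447 R=9 W=0 C=0
```

Healthy pool and vdev lines (zero counters) are filtered out automatically, so only disks that actually reported errors show up.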
 
