zpool status and zfs list are always useful to get a better understanding of what exactly you are doing.

  pool: DataA
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        DataA                                     ONLINE       0     0     0
          scsi-362cea7f0923c9100285ccdc64d721c33  ONLINE       0     0     0

errors: No known data errors

  pool: DataB
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        DataB                                     ONLINE       0     0     0
          scsi-362cea7f0923c9100285ccdde4edb490c  ONLINE       0     0     0

errors: No known data errors

  pool: DataC
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        DataC                                     ONLINE       0     0     0
          scsi-362cea7f0923c9100285cce0150f64a35  ONLINE       0     0     0

errors: No known data errors
NAME                  USED  AVAIL  REFER  MOUNTPOINT
DataA                 432K  42.3T    96K  /DataA
DataB                42.3T   443M    96K  /DataB
DataB/vm-101-disk-0  42.3T  42.3T    56K  -
DataC                 408K  42.3T    96K  /DataC
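Note the DataB lines: the zvol's USED is 42.3T while its REFER is only 56K, which suggests the space is claimed by a reservation rather than by written data (Proxmox typically creates zvols with a refreservation unless the storage is marked as thin-provisioned). A quick way to check, using the dataset name from the output above:

# show whether the zvol's space usage comes from a reservation
zfs get refreservation,volsize,referenced DataB/vm-101-disk-0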
zfs likes raw "dumb" disks, because any layer of RAID or similar may hide or lie about information that ZFS needs.

So I should present disks directly to Proxmox and skip HW RAID? Doing so I will also lose iDRAC disk monitoring functionality. Is there any suggestion if one would like to still use HW RAID?
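For illustration, "presenting disks directly" just means handing ZFS the whole devices by their stable IDs; a minimal sketch of a two-disk mirror (the pool name and device IDs below are placeholders, not from this system):

# list the stable device IDs first
ls /dev/disk/by-id/
# create a mirrored pool from two whole disks
zpool create DataD mirror \
    /dev/disk/by-id/scsi-EXAMPLEDISK1 \
    /dev/disk/by-id/scsi-EXAMPLEDISK2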
According to ......, but it will be unable to recover, as there is a single copy of that data (from the ZFS point of view). zfs set copies=2 ... may help for HW RAID.

But as far as I understand you would lose 50% of your storage. And if a disk fails you could have bad luck and both copies could be stored on the same physical drive, because ZFS can't know where the HW RAID is storing the data.
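As a sketch of that suggestion: copies= is a per-dataset property, so the extra redundancy is layered on top of whatever the pool sits on (the dataset name below is hypothetical):

# keep two copies of every block in this dataset;
# usable capacity for it is effectively halved
zfs set copies=2 DataA/important
# confirm the setting
zfs get copies DataA/important

Note that copies=2 only applies to data written after the property is set; existing blocks are not rewritten.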
Only if you are already using some kind of parity in HW RAID.

And what if the disk is replaced quickly and the rebuild is done by the HW RAID before the monthly scrub? I think everything will be alright.
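For reference, the monthly scrub mentioned here is the scheduled one (on Debian-based systems such as Proxmox, a cron job typically scrubs every pool once a month); a scrub can also be started by hand at any time:

# start a scrub of the pool right away
zpool scrub DataB
# check progress and results
zpool status DataB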
So in my opinion that isn't really a good option if you could just skip the HW RAID and use a ZFS mirror instead, to get more usable space and better performance.
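With eight disks, "a ZFS mirror" would usually mean striped mirrors; a minimal sketch with a hypothetical pool name and placeholder device IDs:

# four two-way mirrors striped together (RAID10-style):
# 50% usable capacity like copies=2 on HW RAID, but with the two
# copies guaranteed to sit on separate disks, and better IOPS
zpool create tank \
    mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
    mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
    mirror /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
    mirror /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8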
8x 8TB = 64TB raw capacity.

I have experimented with this some time ago, and got the opposite results:

HWRAID
It's your server, but personally for more than 3 TB per disk (maybe even below that limit) I would use no less than RAID6 (raidz2), because at large disk sizes there is a great risk that a second drive will fail during the rebuild/resilver, and in that case you will lose data.

I think I'll stick to HW RAID for the moment and see how it goes. With software RAID (ZFS) it's using a lot of RAM and CPU to simulate what HW RAID can already do anyway. Currently made RAID 5 with 8x 8TB disks in HW and added them to the VM via LVM-thin, and voila: 50.8TB usable space in Windows, that can also be used to 100%. Plus I can keep all the RAM for actual VM usage.
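On the capacity arithmetic: RAID5 over 8x 8TB leaves 7x 8TB = 56TB, which is about 50.9 TiB, close to the "50.8TB" Windows reports (Windows counts in TiB but labels them TB). The raidz2 equivalent of that layout would leave roughly 6x 8TB = 48TB usable; a sketch with a hypothetical pool name and placeholder device IDs:

# eight 8TB disks in raidz2: any two can fail without data loss;
# usable space ~ (8-2) x 8TB = 48TB before metadata overhead
zpool create tank raidz2 \
    /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
    /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
    /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
    /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8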