Hey Boys & Girls,
I'm having a hard time deciding what the best configuration would be, balancing performance against capacity.
- In short: I need to upgrade the storage in my server because I'm running out of space (50 TB, almost full), so I've ordered:
8x 20 TB WD HDDs
1x bifurcation card (PCIe 4.0 x16 -> 4x x4)
3x Samsung 990 Pro 1 TB
2x Samsung 990 Pro 2 TB
- I have 128 GB ECC RAM, and the mainboard is an ASRock Rack X570D4I-2T, which has:
1x M.2 slot (PCIe 4.0 x4)
1x PCIe 4.0 x16 slot (for the bifurcation card)
2x OCuLink (8 SATA ports total)
- My idea is:
-- OCuLink SATA ports: all 8 HDDs in a RAIDZ2 array
-- M.2 mainboard slot + 1 bifurcation card slot: 2x 2 TB 990 Pro in a mirror for OS + VMs/containers
-- 2 bifurcation slots: 2x 1 TB 990 Pro in a mirror as SLOG (ZIL) device
-- Last bifurcation slot: 1 TB 990 Pro as L2ARC read cache
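For reference, here's a rough sketch of what that layout would look like as zpool commands. The pool name and device names are placeholders, not my actual setup; in practice you'd use /dev/disk/by-id paths:

```shell
# Sketch only -- placeholder device names, use /dev/disk/by-id in practice.

# Main pool: the 8 HDDs in a single RAIDZ2 vdev
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Mirrored SLOG from two of the 1 TB 990 Pros
zpool add tank log mirror nvme1n1 nvme2n1

# Single L2ARC device from the remaining 1 TB 990 Pro
zpool add tank cache nvme3n1
```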
It would also be nice to additionally have a special metadata vdev, but I have no NVMe slots left...
Additionally, RAIDZ2 gives me a lot more storage than RAID10, but RAIDZ2 still gives me headaches, because:
- The ZFS folks don't recommend RAIDZ1 (which is 33% parity with 3 disks)
- But RAIDZ2 with 8 disks is even less parity (25% with 8 disks)
- On a side note, from what I've found here on the forums, RAIDZ2 and RAID10 are extremely similar in performance.
- Keep in mind, when I write RAID 10 I actually mean the ZFS equivalent (a pool striped across four 2-way mirror vdevs), because of the ZFS benefits (scrubbing etc.)
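Just to be explicit about what I mean by a ZFS "RAID10" in pool terms (again with placeholder device names), it would be four 2-way mirror vdevs striped together:

```shell
# ZFS "RAID10": four 2-way mirror vdevs, striped -- placeholder device names
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh
```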
What would you guys do differently with that storage?
Basically I'm collecting opinions.
RAID10 would give me 80 TB of storage, RAIDZ2 would give me 120 TB (6 data disks x 20 TB), which is a LOT more...
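The capacity math, spelled out (raw decimal TB, ignoring ZFS metadata/slop overhead and the TB-to-TiB conversion):

```python
# Rough raw-capacity comparison for 8 x 20 TB drives.
disks, size_tb = 8, 20

# RAIDZ2: two drives' worth of parity, six drives' worth of data
raidz2_usable = (disks - 2) * size_tb

# Striped 2-way mirrors ("RAID10"): half the drives hold copies
mirrors_usable = (disks // 2) * size_tb

print(raidz2_usable)    # 120
print(mirrors_usable)   # 80
```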
Cheers