Hi, I'd like to ask about the performance of an idea of mine.
I currently have a low-power server on a consumer motherboard, an ASRock N3700M. It has two SATA drives in a ZFS mirror on the onboard ports (rootfs + VMs), and performance is acceptable. But I'm wondering how much more I can squeeze out of the hardware I already have, given the purchase prices of new hardware; I'd like to give it some more performance.
This motherboard has three PCIe 2.0 slots: physically one x16 and two x1. From what I found on Intel ARK for the N3700 CPU, the x16 slot appears to be wired with only a single PCIe lane. I couldn't find any block diagram of the motherboard's components. The x16 slot currently holds a fairly old hardware RAID card, an ASR-3405, with its own separate old-school RAID pool. Everything works fine.
So my question: if I buy two simple PCIe x1 to single M.2 slot adapters and plug NVMe drives into them (the drives' rated speed doesn't really matter, since any NVMe drive is faster than these slots), I assume I'd gain mainly in IOPS. I'd like to use them as VM/CT storage.
According to the PCIe specification, PCIe 2.0 x1 can handle up to 500 MB/s per direction, i.e. 1 GB/s full duplex (source: Wikipedia). I don't know how much overhead sits on top of that.
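For what it's worth, the 500 MB/s figure already accounts for PCIe 2.0's 8b/10b line encoding (10 bits on the wire per 8 data bits); packet-level overhead (TLP/DLLP headers) shaves off a bit more, with the exact amount depending on payload size. A quick sanity check of the arithmetic (the ~20% packet-overhead figure below is an illustrative assumption, not a number from the spec):

```python
# PCIe 2.0 per-lane bandwidth sanity check.
raw_rate = 5e9              # PCIe 2.0 signalling rate: 5 GT/s per lane
encoding = 8 / 10           # 8b/10b line encoding: 8 data bits per 10 on the wire
link_bytes = raw_rate * encoding / 8   # bytes/s per direction after encoding

print(f"link: {link_bytes / 1e6:.0f} MB/s per direction")   # 500 MB/s

# TLP/DLLP packet overhead eats some more; the 20% here is just an
# assumed round number for illustration, not a measured value.
usable = link_bytes * 0.8
print(f"usable (assumed): {usable / 1e6:.0f} MB/s")         # 400 MB/s
```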
NVMe drives these days are pretty fast, say 3 GB/s+. If I set those drives up with a small block size such as 512 B or 4K on a ZFS partition (I'm addicted to ZFS), I think I could pretty much saturate that PCIe x1 link. It depends on the drive, of course, but in theory it should work well, and I'd gain many thousands, maybe hundreds of thousands, of IOPS from NVMe. Bandwidth is not a top priority, as I keep the whole machine under light load. As everyone knows, these two HDDs can only deliver a few hundred IOPS.
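To put the IOPS claim in numbers: if the x1 link were the only limit, dividing its ~500 MB/s by the block size gives a theoretical IOPS ceiling. Real drives, ZFS, and the CPU will land below this; it's just a back-of-the-envelope sketch:

```python
# Theoretical IOPS ceiling of a PCIe 2.0 x1 link at different block sizes.
link_bytes = 500e6          # PCIe 2.0 x1: ~500 MB/s per direction

for block in (512, 4096):
    iops = link_bytes / block
    print(f"{block:>5} B blocks: ~{iops / 1e3:.0f}k IOPS ceiling")
# prints ~977k IOPS for 512 B and ~122k IOPS for 4 KiB
```

Either way that's orders of magnitude above the few hundred IOPS a two-HDD mirror delivers, so the idea holds up on paper.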
My next estimate is that if I implement this idea, the next bottleneck will be the slow CPU.
Just a little PS: I also have much faster machines (HPE Gen8), but I want to minimize power draw, and I like being able to use the 3.5" bays of my custom 4U case, so I don't want to switch to NUCs.
Thanks for any answers.