A dual socket system with 2x Intel Xeon E5-2690v4 has the potential for 72800 "cpu units" of performance (more with turbo+hyperthreading but let's leave that for now)
a dual socket system with 2x Epyc 9575F has the potential for 422400 "cpu units"...
E5-2690v4 is 14c@2.6GHz, 135W.
Epyc 9575F is 64c@3.3GHz, 400W.
even if we ignore the MUCH newer process node, WAY faster memory, and newer PCIe generation, and just count each at equal IPC:
AMD: (64*3300)/400 = 528 instructions per watt
Intel: (14*2600)/135 ≈ 270 instructions per watt
so before any answer would be applicable...
why? what is your use case?
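the arithmetic above as a quick sketch (core counts, clocks, and TDPs are the spec-sheet numbers quoted above; "cpu units" = cores × MHz, a deliberately crude equal-IPC proxy, nothing more):

```python
# Crude "cpu units" comparison: cores * MHz, assuming equal IPC.
# Specs as quoted above; real results also depend on IPC, turbo, memory, etc.
cpus = {
    "E5-2690v4": {"cores": 14, "mhz": 2600, "tdp_w": 135},
    "Epyc 9575F": {"cores": 64, "mhz": 3300, "tdp_w": 400},
}

for name, c in cpus.items():
    units = c["cores"] * c["mhz"]          # per socket
    per_watt = units / c["tdp_w"]
    print(f"{name}: {2 * units} units (dual socket), {per_watt:.0f} units/W")
```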
Also, 40g for cluster traffic is effectively the same as 10g (same latency). you should be fine, but depending on the REST of your system architecture it will likely not...
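why "effectively the same": the only latency the line rate changes is serialization delay, which is tiny next to switch and kernel-stack latency. a rough sketch (the 9000-byte jumbo frame and the ~30 µs end-to-end figure are illustrative assumptions, not measurements of your gear):

```python
# Serialization delay of one frame at a given line rate.
def serialization_us(frame_bytes: int, gbit_per_s: float) -> float:
    return frame_bytes * 8 / (gbit_per_s * 1e9) * 1e6

frame = 9000  # jumbo frame, bytes (illustrative)
d10 = serialization_us(frame, 10)   # ~7.2 us on the wire
d40 = serialization_us(frame, 40)   # ~1.8 us on the wire
print(f"10G: {d10:.1f} us, 40G: {d40:.1f} us, saved: {d10 - d40:.1f} us")
# against ~30+ us of typical switch + kernel stack latency, the ~5 us saved
# barely moves end-to-end latency; 40G mostly buys you bandwidth, not latency.
```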
Hello, please take a look at the following bug report [1].
Therein it is explained that there is a bug in the MSA 2050 firmware: the device reports (via the LBPRZ bit) that after discarding a block, reading it again will return zeroes, but...
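for anyone unfamiliar with LBPRZ: the bit promises the same semantics a sparse file gives you, i.e. discarded/unwritten ranges read back as all zeroes. a filesystem analogue in Python (this only illustrates the promised semantics, it does not talk to the MSA or any SCSI device):

```python
import tempfile, os

# Sparse-file analogue of LBPRZ=1: an unwritten "hole" must read back as zeroes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.write(b"\xff" * 4096)        # first 4 KiB: actually written
    f.truncate(8192)               # second 4 KiB: a hole, never written

with open(path, "rb") as f:
    f.seek(4096)
    hole = f.read(4096)

os.unlink(path)
assert hole == b"\x00" * 4096      # LBPRZ=1 promises exactly this for discarded LBAs
print("hole reads back as zeroes, as LBPRZ=1 promises")
```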
it's way too dumb ;) since your CRUSH rule requires an OSD on three different nodes, and you have two nodes with excess capacity... the excess capacity is unused.
but I think you're going about this the wrong way. What is your REQUIRED usable...
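to see why that excess capacity sits idle, a toy calculation (node sizes are made up; assumes replica size 3, failure domain = host, exactly three hosts, and ignores full ratios and other overhead):

```python
# With 3-way replication across exactly 3 hosts, every object lands on every
# host, so usable capacity is capped by the SMALLEST host, not by the total.
nodes_tb = {"node1": 4.0, "node2": 10.0, "node3": 10.0}  # made-up sizes, TB

usable = min(nodes_tb.values())        # data you can actually store
raw = sum(nodes_tb.values())
stranded = raw - 3 * usable            # raw capacity that can never be used
print(f"raw: {raw} TB, usable: {usable} TB, stranded: {stranded} TB")
```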
Read what it says carefully. PVE can use mdadm easily and with full tooling support BUT YOU SHOULDN'T. I agree with their assessment ;)
There are use cases for it. the converse of my original assertion is that there are cases where you MUST use it...
the "fastest" would be lvm-thick on mdraid10 or directly to individual devices.
the thing you need to realize is that if the speed is above what you actually NEED, you are losing out on other features that may be of much more importance, namely...
Correct. those are not supported options.
It's not a bug. PVE doesn't have a mechanism to "talk" to your storage device. the only way to make sure this doesn't happen is to NOT thin provision your LUNs on the storage.
In what sense? the only real "issue" with this generation of cpu is their atrocious performance/watt, not performance in general. Intel's product portfolio is available tall (core speed) and wide (core count) to suit a wide variety of needs.
Any of the paths can be used, but as @bbgeek17 pointed out only one controller will ACTUALLY be serving a given LUN, so any traffic pointed at the other controller simply gets handed across the internal communication bus inside your SAN. This is...
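the usual way to keep I/O on the owning controller is to let dm-multipath group paths by ALUA priority, so the active/optimized paths are preferred and the partner controller's paths only get used on failover. a sketch of the relevant multipath.conf stanza (vendor/product strings are placeholders, check `multipath -ll` output and your array's documentation for the real values):

```
devices {
    device {
        vendor  "VENDOR"               # placeholder, match your SAN
        product "PRODUCT"              # placeholder
        path_grouping_policy "group_by_prio"
        prio "alua"                    # rank path groups by ALUA state
        path_checker "tur"
        failback "immediate"
        no_path_retry 18
    }
}
```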
The problem with that statement is that there are too many variables to consider. When dealing with Truenas, you're dealing with their ZFS stack passing zvols to iscsitgt; this is a known quantity, but the zpool organization, makeup (number/type...
Lesson for the future: don't mess with your production machine.
It probably won't. you just won't be able to access various services depending on the stuff you broke.