This, while true, is the wrong perspective. What is the CONSEQUENCE of downtime? Put a cost on that, and you have an economic baseline.
It's one thing if your massive ecommerce platform is out; it's another if you can't access your emails for a...
There are a few things you should be aware of:
Using /dev/nvme* device names for zpool vdevs is dangerous, since that nomenclature is POSITIONAL (meaning it's specific to the slot the drive sits in, not to the drive itself). Use WWNs instead. To do that, simply...
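One common way to switch an existing pool over to WWN-based names is an export/import cycle. A sketch, assuming a pool named `tank` (the pool name here is a placeholder, not from the original post):

```shell
# Export the pool, then re-import it telling ZFS to scan /dev/disk/by-id,
# where drives appear under their stable wwn-* names instead of positional
# /dev/nvme* names. The pool name "tank" is a placeholder.
zpool export tank
zpool import -d /dev/disk/by-id tank

# Verify: vdev members should now show as wwn-0x... identifiers.
zpool status tank
```

After this, the vdev labels survive the drive being moved to a different slot or controller.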
Hi @Nathan Stratton and all,
You need clear guidance here: do not do that unless you have a very compelling reason to.
a) Your hardware is discontinued and past the end of service, which significantly increases the likelihood of component...
The reason you can't find the answer is that it's not something you can answer in a vacuum. As I alluded to above, it depends on just how dependable the network is, and how spammy/sensitive the service using it is.
"conventional wisdom" has...
A dual-socket system with 2x Intel Xeon E5-2690v4 has the potential for 72800 "cpu units" of performance (more with turbo + hyperthreading, but let's leave that aside for now).
A dual-socket system with 2x Epyc 9575F has the potential for 422400 "cpu units"...
E5-2690v4 is 14c@2.6GHz, 135W.
Epyc 9575F is 64c@3.3GHz, 400W.
Even if we ignore the MUCH newer process node, the WAY faster memory, and the newer PCIe generations, and just counted each at equal IPC:
AMD: (64 * 3300) / 400 = 528 instructions per watt
Intel...
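The arithmetic above ("cpu units" = cores × base clock in MHz, summed over sockets, then units per watt of TDP) can be checked with a few lines. This is just the post's back-of-envelope model, nothing more:

```python
def cpu_units(sockets: int, cores: int, base_mhz: int) -> int:
    """Raw 'cpu units': cores * base clock (MHz), summed over all sockets."""
    return sockets * cores * base_mhz

# Dual-socket figures from the post.
intel_units = cpu_units(2, 14, 2600)   # Xeon E5-2690v4: 14c @ 2.6 GHz
amd_units = cpu_units(2, 64, 3300)     # Epyc 9575F: 64c @ 3.3 GHz
print(intel_units, amd_units)          # 72800 422400

# Per-CPU efficiency at assumed-equal IPC: units per watt of TDP.
amd_per_watt = (64 * 3300) / 400       # 400 W TDP -> 528.0
intel_per_watt = (14 * 2600) / 135     # 135 W TDP -> ~269.6
print(round(amd_per_watt), round(intel_per_watt, 1))
```

Even under this deliberately generous equal-IPC assumption, the Epyc comes out roughly 2x more efficient per watt.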
so before any answer would be applicable...
why? what is your use case?
Also, 40G for cluster traffic is effectively the same as 10G (same latency). You should be fine, but depending on the REST of your system architecture it will likely not...
Hello, please take a look at the following bug report [1].
Therein it is explained that there is a bug in the MSA 2050 firmware: the device reports (via the LBPRZ bit) that, after discarding a block, reading it again will return zeroes, but...
It's way too dumb ;) Since your CRUSH rule requires an OSD on three different nodes, and you have two nodes with excess capacity... the excess capacity is unused.
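For reference, this is what the stock replicated rule with a host failure domain looks like in a decompiled CRUSH map (names here are the Ceph defaults, not taken from your cluster) — `chooseleaf ... type host` is the part forcing each replica onto a different node:

```
rule replicated_rule {
    id 0
    type replicated
    step take default
    # Pick N distinct hosts, then one OSD under each -- replicas can never
    # land on two OSDs of the same node, regardless of free space elsewhere.
    step chooseleaf firstn 0 type host
    step emit
}
```

With size=3 and only three nodes, every placement group needs all three nodes, so the smallest node caps usable capacity.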
but I think you're going about this the wrong way. What is your REQUIRED usable...
Read what it says carefully. PVE can use mdadm easily and with full tooling support, BUT YOU SHOULDN'T. I agree with their assessment ;)
There are use cases for it. The converse of my original assertion is that there are cases where you MUST use it...
The "fastest" would be LVM-thick on mdraid10, or directly on individual devices.
The thing you need to realize is that if the speed is above what you actually NEED, you are losing out on other features that may be of much more importance, namely...
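A rough sketch of the LVM-thick-on-mdraid10 layout, for illustration only — the device names, VG name, and LV name/size are all placeholders, not anything from this thread:

```shell
# Build a RAID10 array from four disks (placeholder device names).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put LVM on top of the array.
pvcreate /dev/md0
vgcreate vmdata /dev/md0

# Thick (fully preallocated) LV: no thin pool, so no metadata/allocation
# overhead in the write path -- that's where the speed comes from.
lvcreate -L 100G -n vm-100-disk-0 vmdata
```

The trade-off is exactly what's described above: thick LVs give up snapshots and overprovisioning that thin pools or ZFS would provide.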
Correct. Those are not supported options.
It's not a bug. PVE doesn't have a mechanism to "talk" to your storage device. The only way to make sure this doesn't happen is to NOT thin-provision your LUNs on the storage.