Yes, you can.
But you might be better off not doing the RAID. Either use one drive for OS/Local storage and let Ceph have the other three 1TB drives directly as OSDs, or - better - get a small SSD for OS/Local storage and let Ceph manage all four drives directly as separate OSDs.
The...
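To make that concrete - assuming /dev/sdb through /dev/sdd are the three drives you hand over (device names are placeholders for your hardware) - giving them to Ceph is roughly:
# pveceph osd create /dev/sdb
# pveceph osd create /dev/sdc
# pveceph osd create /dev/sdd
On older Proxmox releases the equivalent spelling is "pveceph createosd /dev/sdX".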
@udo nailed it. It's not a matter of 3-node clusters not working - it's a matter of how you want them to work when there is a failure. You need the "+1" node in order to bring the cluster back to a stable operating state. It should continue to work without it, but you don't want it to stay...
Quick and dirty test: disable SpeedStep. You don't want to leave it this way permanently because your idle power usage will go through the roof, but if your halts/reboots go away you'll have some more evidence that this was probably the issue.
Longer term - turn SpeedStep back on, but find...
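For the quick test, a rough software-side approximation of disabling SpeedStep (assuming the standard cpufreq tooling is installed) is to pin the governor to performance without even rebooting into the BIOS:
# cpupower frequency-set -g performance
or, per CPU, without cpupower:
# echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor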
There is a known - but very, very rare - issue with some Intel CPUs where, upon reaching the lowest power states, the CPU cannot "restart": it looks dead and the system will watchdog out. In these cases the CPU just stops and there is no kernel panic, which leaves no opportunity for the Kernel to...
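If the deep C-states turn out to be the culprit, the usual workaround (a sketch, assuming the standard Debian/Proxmox grub setup) is to cap the idle states on the kernel command line rather than disabling SpeedStep outright. In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1"
then:
# update-grub
# reboot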
I've spun this around every way I can think of and have not found a scenario where EC pools make sense in a small cluster.
They have immense value in large to very-large clusters. IMNSHO, EC pools start to make sense when your pool consists of at least 12 nodes (8+3 EC pools, with at least...
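For reference, an 8+3 pool on a cluster that size is just (pool name and PG count here are made-up examples):
# ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host
# ceph osd pool create ecpool 256 256 erasure ec-8-3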
Absolute minimum: 3 nodes, 3 OSDs. Will still run with a failed node - but Ceph will report "degraded". There are some cases where Ceph may not be able to support writes, at which point VMs with images on RBD may stall or fault. Good for labs and small deployments that need to be "sorta HA"...
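The usual settings for that minimal layout are 3 copies with writes still allowed at 2, e.g. (assuming your pool is named "rbd"):
# ceph osd pool set rbd size 3
# ceph osd pool set rbd min_size 2
With min_size 2, losing one node leaves you degraded but writable; lose a second and I/O stalls, which is the "sorta HA" caveat above.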
Agreed.
But you make the claim about being able to run a 3-node cluster and still access the data with a node OOS. While it is "true", it is also dangerous guidance and shouldn't be given without a caution - even in a benchmarking note.
Interesting and useful write-up. The results presented are a bit summarized (thin on details), but it's still quite useful.
I was surprised to see the large read performance gain with the 100GbE network vs 10GbE, especially given the close race between them on the write side. Some more digging on this -...
KVM/QEMU always presents a simulated hardware configuration to the VM. It can make the hardware appear in almost any configuration you want. If you want to have 4 virtual CPUs available to the VM, you can tell it that it has 1 socket with 4 cores, two sockets with 2 cores each, or 4...
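For example, on Proxmox (assuming VM ID 100 as a placeholder) either of these gives the guest 4 vCPUs, just presented with a different topology:
# qm set 100 --sockets 1 --cores 4
# qm set 100 --sockets 2 --cores 2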
The problem is the Proxmox stats daemon (pvestatd). It checks stats on all of your drives - and it does so rather frequently.
You can get your drives to spin down if you turn it off:
#pvestatd disable
But if you do then you will also lose all of the statistics on the "summary" page and if...
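If that subcommand isn't available on your release, the same effect via systemd (which I'd expect to work on any recent version, though I haven't checked them all) is:
# systemctl stop pvestatd
# systemctl disable pvestatd
the second line keeping it off across reboots.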
You'd really not enjoy running the journal on USB3 :)
If you have an extra PCIe x4 (or larger) slot, you could do much better with an M.2 NVMe SSD. Almost any would do better than the USB3 journal - but you could get some that are REALLY good. Fit it into the slot with a simple M.2 PCIe...
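Once the NVMe is in, pointing the journal at it is done at OSD creation time - roughly like this on the 5.x-era tooling (treat the exact flag spelling as a sketch, it has moved around between releases):
# pveceph createosd /dev/sdb -journal_dev /dev/nvme0n1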
Fair. But with respect to the OP's question: what is the advantage of EC with 3+3 on a 3-node cluster? Firstly, I do not think you get a set of placement rules that would guarantee resiliency against a single host failure, so depending on your goals for the cluster the EC pool might not even...
No. I do not believe that you can ensure the placement for a 3+3 EC pool on 3 hosts such that you can still read with one host failed.
Also, I'm not sure what you gain from this configuration vs a "normal" replicated pool with size=2 (i.e., two copies of all data).
With a 3+3 EC pool you require 2x...
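You can see the placement problem for yourself: with the failure domain at host level, CRUSH has to spread 6 chunks over 3 hosts and can't, and dropping the failure domain to osd gives up the single-host-failure guarantee entirely. A quick sketch (profile and pool names are arbitrary):
# ceph osd erasure-code-profile set ec-3-3 k=3 m=3 crush-failure-domain=host
# ceph osd pool create ec33 64 64 erasure ec-3-3
# ceph pg dump_stuck
expect the PGs of that pool to sit undersized/inactive on a 3-host cluster.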
A bit of a necro of an old post - but it really is a problem that pvestatd actually reads data from your drives and does this so aggressively.
As noted by the OP, it prevents drives from going idle using tools like hd-idle. Some will argue the benefit of idling your drives, but if the user...
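For reference, once pvestatd is quieted down, something like this in /etc/default/hd-idle (600 seconds and "sda" are just example values) is what finally lets a disk spin down:
HD_IDLE_OPTS="-i 0 -a sda -i 600"
i.e., no default timeout, and a 600-second idle timer on sda.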
It's been a while since I last gave Gluster a go, but recovering from faults is why I stopped using it. When things go bad they can get very bad very fast. My experience was bad enough that I've not yet been willing to rely on it.
I've been running with Ceph for a couple of years now, started...
Odroid-C2 would be limited by kernel support. Officially it only supports an older (3.14) kernel, and newer kernels have significant issues with USB and networking (and video, but that's not so important here). There is a mainline kernel in the works and it might be released with 4.14 soon - but its...
Not sure I'd change it to 0 - which effectively disables swap. Most experts suggest swappiness=10 for server workloads with adequate RAM installed. 0 could lead to a fault/shutdown if something goes off the rails and gobbles RAM.
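For the record, the runtime and persistent ways to set the suggested value:
# sysctl -w vm.swappiness=10
and, to make it stick, in /etc/sysctl.conf (or a file under /etc/sysctl.d/):
vm.swappiness = 10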