Search results

  1. 4 nodes ceph configuration

    Yes, you can. But you might be better off not doing the RAID. Either use one drive for OS/local storage and let Ceph have the other three 1TB drives directly as OSDs, or - better - get a small SSD for OS/local storage and let Ceph manage all four drives directly as separate OSDs. The...
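The per-drive OSD layout suggested above can be sketched with Proxmox's own tooling. A minimal sketch; the device names are placeholders for your actual drives:

```shell
# On each node: hand the raw data drives to Ceph as individual OSDs.
# Device names are examples - check yours first with: lsblk
pveceph osd create /dev/sdb    # older PVE releases use: pveceph createosd /dev/sdb
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
```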
  2. Proxmox VE Ceph Benchmark 2018/02

    @udo nailed it. It's not a matter of 3-node clusters not working - it's a matter of how you want them to work when there is a failure. You need the "+1" node in order to bring the cluster back to a stable operating state. It should continue to work without it, but you don't want it to stay...
  3. Random reboot - HW Failure or something else

    Quick and dirty test: disable SpeedStep. You don't want to leave it this way permanently because your idle-power usage will go through the roof, but if your halt/reboots go away you'll have some more evidence that this was probably the issue. Longer term - turn SpeedStep back on, but find...
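One way to run the "quick and dirty" test above without touching the BIOS is to limit CPU sleep states from the kernel command line. A sketch assuming a Debian-based Proxmox host booting with GRUB; the parameters restrict C-states, which is closely related to (but not identical with) disabling SpeedStep in firmware:

```shell
# Diagnostic only: keep the CPU out of its deepest sleep states.
# Edit /etc/default/grub and add the parameters to the default command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=1"
update-grub    # regenerate grub.cfg with the new parameters
reboot         # they take effect on the next boot
```

If the random halts stop, revert the change and look for a firmware/microcode fix, since idle power draw rises noticeably with deep C-states disabled.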
  4. Random reboot - HW Failure or something else

    There is a known - but very, very rare - issue with some Intel CPUs where, upon reaching the lowest power states, the CPU cannot "restart": it looks dead and the system will watchdog out. In these cases the CPU just stops and there is no "kernel panic", which leaves no opportunity for the kernel to...
  5. Random reboot - HW Failure or something else

    Both restarts are in the wee hours of the morning? Is this when the system is mostly idle, or do you have a load running on it?
  6. Bluestore / SSD / Size=2?

    I've spun this around every way I can think of and have not found a scenario where EC pools make sense in a small cluster. They have immense value in large to very-large clusters. IMNSHO, EC pools start to make sense when your pool consists of at least 12 nodes (8+3 EC pools, with at least...
  7. Minimum requirements for full high availability (PVE+CEPH)

    Absolute minimum: 3 nodes, 3 OSDs. Will still run with a failed node - but Ceph will report "degraded". There are some cases where Ceph may not be able to support writes, at which point VMs with images on RBD may stall or fault. Good for labs and small deployments that need to be "sorta HA"...
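The "runs degraded, but may stop accepting writes" behaviour described above hinges on the pool's replication settings. A minimal sketch; the pool name `rbd` is an example:

```shell
# Keep three replicas, but allow I/O while only two are available (one node down):
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
# With min_size 2, losing a second replica stalls writes until recovery -
# which is when VMs with images on RBD can hang or fault.
```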
  8. Proxmox VE Ceph Benchmark 2018/02

    Agreed. But you make the claim about being able to run a 3-node cluster and still access the data with a node OOS. While it is "true", it is also dangerous guidance and shouldn't be given without a caution - even in a benchmarking note.
  9. Proxmox VE Ceph Benchmark 2018/02

    Interesting and useful write-up. A bit summarized on the results presented (thin on details), but still quite useful. I was surprised to see the large read performance gain with the 100GbE network vs 10GbE, especially given the close race between them on the write side. Some more digging on this -...
  10. Cores CPUs Threads

    KVM/qemu always presents a simulation of a hardware configuration to the VM. It can make the hardware appear in almost any configuration you want. If you want to have 4 virtual CPUs available to the VM you can tell it that it has 1 socket with 4 cores, two sockets with 2 cores each or 4...
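The topology choices described above map directly onto a VM's configuration in Proxmox. A sketch using `qm`; VMID 100 is an example:

```shell
# Each of these presents 4 virtual CPUs to the guest, just laid out differently:
qm set 100 --sockets 1 --cores 4   # 1 socket x 4 cores
qm set 100 --sockets 2 --cores 2   # 2 sockets x 2 cores each
qm set 100 --sockets 4 --cores 1   # 4 sockets x 1 core each
```

The layout can matter for guest licensing (per-socket licenses) and for NUMA-aware guests, but for most workloads the total vCPU count is what counts.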
  11. Disk spin down

    The problem is the Proxmox Stats Daemon (pvestatd). It checks stats on all of your drives - and it does it rather frequently. You can get your drives to spin down if you turn it off: #pvestatd disable But if you do then you will also lose all of the statistics on the "summary" page and if...
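On a systemd-based PVE host, turning the stats daemon off can be sketched as follows; note the trade-off described above (the summary graphs stop updating):

```shell
# Stop the stats daemon so idle drives can actually spin down:
systemctl stop pvestatd
systemctl disable pvestatd     # keep it off across reboots
# To restore statistics collection later:
systemctl enable --now pvestatd
```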
  12. Ceph OSD Journal on USB3.0 SSD?

    You'd really not enjoy running the journal on USB3 :) If you have an extra PCIe x4 (or larger) slot you could do much better with an M.2 NVMe SSD. Almost any would do better than the USB3 journal - but you could get some that are REALLY good. Fit it into the slot with a simple M.2 PCIe...
  13. Ceph - EC-Pool Setup with 3 hosts

    Fair. But with respect to the OP's question: what is the advantage of EC with 3+3 on a 3-node cluster? Firstly, I do not think you get a set of placement rules that would guarantee resiliency against a single host failure, so depending on your goals for the cluster the EC pool might not even...
  14. Ceph - EC-Pool Setup with 3 hosts

    No. I do not believe that you can ensure the placement for a 3+3 EC pool on 3 hosts such that you can still read with one host failed. Also, I'm not sure what you gain from this configuration vs a "normal" replicated pool with "size=2" (i.e., two copies of all data). With a 3+3 EC pool you require 2x...
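The placement problem described above shows up directly when you create the profile. A sketch; the profile and pool names (`ec33`, `ecpool`) are examples:

```shell
# A 3+3 profile needs k+m = 6 distinct failure domains.
# With crush-failure-domain=host and only 3 hosts, PGs cannot all be placed:
ceph osd erasure-code-profile set ec33 k=3 m=3 crush-failure-domain=host
# Dropping to crush-failure-domain=osd lets placement succeed, but then two
# chunks of one stripe can land on the same host - so a single host failure
# can take out more chunks than the pool can tolerate.
ceph osd pool create ecpool 64 64 erasure ec33
```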
  15. pvestatd doesn't let HDDs go to sleep

    A bit of a necro to an old post - but it really is a problem that pvestatd actually reads data from your drives, and does so aggressively. As noted by the OP, it prevents drives from going idle even with tools like hd-idle. Some will argue the benefit of idling your drives, but if the user...
  16. Ceph vs GlusterFS

    It's been a while since I last gave Gluster a go, but recovering from faults is why I stopped using it. When things go bad they can get very bad very fast. My experience was bad enough that I've not yet been willing to rely on it. I've been running with Ceph for a couple of years now, started...
  17. ARM = Future of Proxmox?

    Odroid-C2 would be limited by kernel support. Officially it only supports an older (3.14) kernel, and newer kernels have significant issues with USB and networking (and video, but that's not so important here). There is a mainline kernel in the works and it might be released with 4.14 soon - but it's...
  18. ARM = Future of Proxmox?

    LXC might be interesting but the ARM community, in general, seems much more interested in Docker/Kubernetes.
  19. PVE 5.0 Swap Usage

    Not sure I'd change it to 0 - which effectively disables swapping. Most experts suggest swappiness=10 for server workloads with adequate RAM installed. 0 could lead to a fault/shutdown if something runs away and gobbles RAM.
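The swappiness=10 suggestion above can be applied and persisted as follows; a sketch assuming a host with a writable /etc/sysctl.d (the file name is an example):

```shell
# Check the current value (the Linux default is 60):
sysctl vm.swappiness
# Apply 10 immediately:
sysctl -w vm.swappiness=10
# Persist it across reboots:
echo "vm.swappiness = 10" > /etc/sysctl.d/90-swappiness.conf
```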
