Search results

  1. Suggestion on PVE Cluster with CEPH

    It's more of a philosophical question than a technical one. There is no single answer to the question. Both approaches are valid and it will really come down to preference more than technology. You can make strong arguments why the fully hyperconverged approach is better - and equally valid...
  2. How do I share physical drives to Freenas VM?

    There would be no need to run FreeNAS in a VM on Proxmox if Proxmox provided an even moderately decent way to manage and present shares. The FreeNAS gui and tools are what people want. Until this is available people will still want to run a separate NAS platform in a VM. The "official"...
  3. How do I share physical drives to Freenas VM?

    Nothing about that snippet from freenas.org discourages running FreeNAS in a hypervisor. In fact, it exists precisely to describe how to do it safely. Pass your disks directly through to the VM and bypass the Hypervisor. Note that the procedure referenced by @Belokan does not do a true pass...
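
    As a minimal sketch of the two approaches (the VM ID, the by-id path, and the PCI address below are placeholders, not values from the thread):

      # attach a whole physical disk to the VM by its stable by-id path
      # (the hypervisor still sits in the I/O path, so this is not true passthrough)
      qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL

      # true passthrough: hand the entire HBA/controller to the VM via PCIe passthrough,
      # so the guest talks to the disks directly (requires IOMMU support enabled)
      qm set 100 -hostpci0 01:00.0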
  4. Latest Ceph User Survey shows that 33% use Proxmox VE as Platform

    Not surprised to see Proxmox VE so heavily represented among Ceph installations. Proxmox is one of the few virtualization environments that makes Ceph deployments reasonably easy. Well done to the Proxmox team.
  5. 4 nodes ceph configuration

    Yes, you can. But you might be better off not doing the RAID. Either use one drive for OS/local storage and let Ceph have the other three 1TB drives directly as OSDs, or - better - get a small SSD for OS/local storage and let Ceph manage all four drives directly as separate OSDs. The...
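
    A rough sketch of the second suggestion, assuming the SSD holds the OS and the device names are placeholders (on older PVE releases the command was pveceph createosd):

      # give each data drive to Ceph directly as its own OSD
      pveceph osd create /dev/sdb
      pveceph osd create /dev/sdc
      pveceph osd create /dev/sdd
      pveceph osd create /dev/sde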
  6. Proxmox VE Ceph Benchmark 2018/02

    @udo nailed it. It's not a matter of 3-node clusters not working - it's a matter of how you want them to work when there is a failure. You need the "+1" node in order to bring the cluster back to a stable operating state. It should continue to work without it, but you don't want it to stay...
  7. Random reboot - HW Failure or something else

    Quick and dirty test: disable SpeedStep. You don't want to leave it this way permanently because your idle-power usage will go through the roof, but if your halt/reboots go away you'll have some more evidence that this was probably the issue. Longer term - turn SpeedStep back on, but find...
  8. Random reboot - HW Failure or something else

    There is a known - but very, very rare - issue with some Intel CPUs where, upon reaching the lowest power states, the CPU cannot "restart": it looks dead and the system will watchdog out. In these cases the CPU just stops and there is no kernel panic, which leaves no opportunity for the kernel to...
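
    A related way to test this from the kernel side (not the same as toggling SpeedStep in the BIOS, but it keeps the CPU out of the deepest sleep states) is to cap the allowed C-states on the kernel command line; the grub file path is the Debian/PVE default:

      # /etc/default/grub - limit how deep the CPU may sleep
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=1"
      # then apply and reboot:
      update-grub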
  9. Random reboot - HW Failure or something else

    Both restarts are in the wee-hours of the morning? Is this when the system is mostly idle or do you have a load running on it?
  10. Bluestore / SSD / Size=2?

    I've spun this around every way I can think of and have not found a scenario where EC pools make sense in a small cluster. They have immense value in large to very-large clusters. IMNSHO, EC pools start to make sense when your pool consists of at least 12 nodes (8+3 EC pools, with at least...
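
    For reference, a sketch of the 8+3 layout mentioned above (the profile and pool names are made up; with k+m=11 and one chunk per host you need at least 11 hosts just to place the shards, hence the 12-node floor):

      # 8 data chunks + 3 coding chunks, one chunk per host
      ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host
      ceph osd pool create ecpool 128 128 erasure ec-8-3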
  11. Minimum requirements for full high availability (PVE+CEPH)

    Absolute minimum: 3 nodes, 3 OSDs. Will still run with a failed node - but Ceph will report "degraded". There are some cases where Ceph may not be able to support writes, at which point VMs with images on RBD may stall or fault. Good for labs and small deployments that need to be "sorta HA"...
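
    The write-stall case comes down to the pool's size/min_size settings; a sketch with the common defaults (the pool name is a placeholder):

      # keep 3 copies, and accept writes only while at least 2 copies are reachable
      ceph osd pool set vm-pool size 3
      ceph osd pool set vm-pool min_size 2
      # with 3 nodes, one failed node still leaves 2 copies; lose a second OSD and the
      # affected PGs drop below min_size, so RBD writes stall until recovery completes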
  12. Proxmox VE Ceph Benchmark 2018/02

    Agreed. But you make the claim that you can run a 3-node cluster and still access the data with a node OOS. While it is "true", it is also dangerous guidance and shouldn't be given without a caution - even in a benchmarking note.
  13. Proxmox VE Ceph Benchmark 2018/02

    Interesting and useful write-up. The results are presented in summarized form (thin on details) but still quite useful. I was surprised to see the large read performance gain with the 100GbE network vs 10GbE, especially given the close race between them on the write side. Some more digging on this -...
  14. Cores CPUs Threads

    KVM/qemu always presents a simulated hardware configuration to the VM. It can make the hardware appear in almost any configuration you want. If you want 4 virtual CPUs available to the VM, you can tell it that it has 1 socket with 4 cores, 2 sockets with 2 cores each, or 4...
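
    For example (the VM ID is a placeholder), either of these presents 4 vCPUs to the guest, just with a different simulated topology:

      qm set 100 -sockets 1 -cores 4
      qm set 100 -sockets 2 -cores 2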
  15. Disk spin down

    The problem is the Proxmox stats daemon (pvestatd). It checks stats on all of your drives - and it does so rather frequently. You can get your drives to spin down if you turn it off: #pvestatd disable But if you do, then you will also lose all of the statistics on the "summary" page and if...
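
    On current PVE releases pvestatd runs as a systemd unit, so stopping it would look something like this (note the trade-off with the lost statistics mentioned above):

      # stop the stats daemon now, and keep it from starting on the next boot
      systemctl stop pvestatd
      systemctl disable pvestatd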
  16. Ceph OSD Journal on USB3.0 SSD?

    You'd really not enjoy running the journal on USB3 :) If you have an extra PCIe x4 (or larger) slot you could do much better with an M.2 NVMe SSD. Almost any would do better than the USB3 journal - but you could get some that are REALLY good. Fit it into the slot with a simple M.2 PCIe...
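
    A sketch of putting the fast device to use, assuming a BlueStore OSD and placeholder device names (on the FileStore setups of that era the equivalent flag was --journal rather than --block.db):

      # data on the spinner, RocksDB/WAL on the NVMe SSD
      ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1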
  17. Ceph - EC-Pool Setup with 3 hosts

    Fair. But with respect to the OP's question: what is the advantage of EC with 3+3 on a 3-node cluster? Firstly, I do not think you get a set of placement rules that would guarantee resiliency against a single host failure, so depending on your goals for the cluster the EC pool might not even...
  18. Ceph - EC-Pool Setup with 3 hosts

    No. I do not believe that you can ensure the placement for a 3+3 EC pool on 3 hosts such that you can still read with one host failed. Also, I'm not sure what you gain from this configuration vs a "normal" replicated pool with size=2 (i.e., two copies of all data). With a 3+3 EC pool you require 2x...
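
    To make the placement problem concrete (the profile name is made up): with only 3 hosts, a 3+3 profile can only spread its 6 shards by dropping the failure domain to the OSD level, and nothing then guarantees a single host holds no more than m=3 shards:

      # 6 shards on 3 hosts forces the failure domain down to individual OSDs;
      # a single host failure can then take out more than 3 shards and leave PGs unreadable
      ceph osd erasure-code-profile set ec-3-3 k=3 m=3 crush-failure-domain=osd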
  19. pvestatd doesn't let HDDs go to sleep

    A bit of a necro of an old post - but it really is a problem that pvestatd actually reads data from your drives, and does so rather aggressively. As noted by the OP, it prevents drives from going idle using tools like hd-idle. Some will argue the benefit of idling your drives, but if the user...
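
    Once pvestatd is out of the way, an illustration of the spin-down itself using hdparm (named here instead of hd-idle purely as an example; the device and timeout are placeholders):

      hdparm -S 120 /dev/sdb   # standby timeout of 120 * 5s = 10 minutes of inactivity
      hdparm -y /dev/sdb       # or put the drive into standby immediately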
