Recent content by markmarkmia

  1. Proxmox Ceph different OSD counts per host

    So, if I want to migrate over to larger drives, what would the best-practice approach be (see the drive-replacement sketch after this list)? Still attempt to keep the amount of storage provided per host the same to remain balanced (even if it affects performance a little)? Or is keeping the number of OSDs per host the same more important? (I...
  2. Proxmox Ceph different OSD counts per host

    OK, understood. I think a solution in my case will be to put Proxmox on M.2 SSDs on a PCIe adapter and boot from that, freeing up six 2.5" drives on all servers in the cluster. I think this is what I will do. Thank you!
  3. Proxmox Ceph different OSD counts per host

    Dumb newb question I'm sure, but when creating an OSD with pveceph, will it automatically sort out the CRUSH map stuff when using different-sized OSDs or a different number of OSDs per host (see the pveceph sketch after this list)? Example: I've got 4 hosts that have 8 2.5" bays; two I use in RAID1 for Proxmox boot, the other 6 I am...
  4. VZDump slow on ceph images, RBD export fast

    Frank, thank you for your script. It is working well for me, allowing fast daily backups (see the rbd export-diff sketch after this list); I'm looking to make Ceph my primary storage because of this (and because Ceph can do snapshots) instead of iSCSI. Do you foresee the ability to do continuous/synthetic fulls (never have to run a 'full'...
  5. Bluestore / SSD / Size=2?

    Thanks Alex. I may have mis-stated what I had read; I think the primary OSD for every object was on an SSD (see the primary-affinity sketch after this list). I'm not sure of the exact CRUSH configuration details, but the effect was that all writes went to the SSDs and were then replicated to the spinners (with SSD WAL), and all reads went to the SSD...
  6. Bluestore / SSD / Size=2?

    To add for posterity, in case anyone else is googling this topic down the road: here is probably the single biggest risk, from some back-reading I've done on the Ceph-users list (credit goes to Ceph-users member Wido for the explanation; I'm re-stating it in my own words; see also the pool size/min_size sketch after this list). In a 2/1 scenario, even with...
  7. Bluestore / SSD / Size=2?

    Thanks PigLover. I had been thinking of a 4+2 EC pool for RBD (see the EC profile sketch after this list); I had heard it got a bad rap with a cache tier in front of it, but everyone's use case is a bit different. I have thought it over and I agree: I think even with SSDs, having to do (in 4+2) 6 writes per stripe (to 6 OSDs) for one...
  8. Bluestore / SSD / Size=2?

    I can create the pool; it's creating the RBD volume for the VM storage that is the problem (see the rbd create sketch after this list).
  9. Bluestore / SSD / Size=2?

    I agree, that's the standard calculation. But RAID10 seems to be acceptable for most people, as the chances of a fault are statistically quite low as long as you aren't using enormous drives (or consumer drives with a worse BER rating). I'm trying to understand (as I somewhat, but obviously not fully...
  10. Bluestore / SSD / Size=2?

    Is it common for an object to be invalid? If I compare a Ceph cluster on relatively reliable servers (dual power supplies, ECC RAM, UPS-backed) with redundant switches/links and enterprise-grade MLC SSDs in the 400-500GB range, is my risk of data loss (roughly speaking) with Size=2 Minsize=1...
  11. Bluestore / SSD / Size=2?

    I do see posts saying Size=2 minsize=1 is a bad idea. But some of the "worst" reasons this is/was a bad idea (data inconsistency if there are two mismatched copies of data because a rebalance started or writes happened and then an OSD comes back to life or something...) maybe seem to be...
  12. Proxmox VE Ceph Benchmark 2018/02

    Is there a way to hack in support for direct-write EC pools in Ceph? I think the barrier presently is that we can't specify the data pool, since direct-write EC pools still need to use a replicated pool for metadata (see the rbd_default_data_pool sketch after this list). I feel that for smaller networks this might help with throughput (halving the...
  13. Proxmox + RAW EC RBD

    It's probably because the metadata for EC images still needs to be in a replicated pool, and Proxmox will need special hooks for this when creating images within the EC RBD pool (just a guess, though).
  14. Ceph: Device usage LVM

    I came across this old post while searching for answers to the same issue myself, so in case someone else finds it the same way, I thought I'd add the answer here (though I'm sure it's too late to help the OP, lol; see also the lvm.conf sketch after this list). The solution, it turns out, is that you need to blacklist your...
  15. Unable to join cluster - corosync.service failed

    I'm pretty much finding the same thing. I had set up Proxmox a couple of versions ago and had no issues with Corosync (other than forgetting to add the cluster members to every node's /etc/hosts file; see the /etc/hosts sketch after this list). But the only way I've gotten it to work properly on the latest version is to manually hack everything. I...
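
Sketches referenced above

On item 1 (migrating to larger drives): a minimal sketch of the usual one-OSD-at-a-time replacement flow. The OSD id and device name are hypothetical, and the exact pveceph subcommand spelling varies between PVE releases.

    # Check how full each OSD and host is before and after every swap
    ceph osd df tree

    # Drain the old OSD and wait until recovery finishes (active+clean)
    ceph osd out 12
    ceph -w

    # Stop and remove the old OSD ("pveceph osd destroy" on newer PVE,
    # "pveceph destroyosd" on older releases)
    systemctl stop ceph-osd@12
    pveceph osd destroy 12

    # Swap in the larger drive and create the new OSD; the CRUSH weight
    # is taken from the device capacity automatically
    pveceph osd create /dev/sdX

    # Optionally nudge a weight if one host ends up carrying noticeably more data
    ceph osd crush reweight osd.12 1.82    # id and weight are examples only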
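
On item 3 (different OSD sizes and counts per host): as far as I know, pveceph sets each new OSD's CRUSH weight from the device capacity, so mixed sizes and uneven counts are handled automatically; what matters for balance is the total weight per host. Device names below are placeholders, and PVE 5.x spells the command "pveceph createosd".

    # Create OSDs on the free bays of one host
    pveceph osd create /dev/sdc
    pveceph osd create /dev/sdd

    # CRUSH weights are set per device size (roughly its capacity in TiB);
    # compare the per-host totals to see how data will spread
    ceph osd tree
    ceph osd df tree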
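
On item 4 (continuous/synthetic fulls): this is not Frank's script, only a sketch of the rbd primitives such a scheme could build on; incremental diffs between snapshots can later be folded together so a fresh full never has to be read from the cluster again. Pool, image, snapshot, and file names are made up.

    # Initial full export plus a snapshot to diff against later
    rbd snap create vmpool/vm-100-disk-0@bk0
    rbd export vmpool/vm-100-disk-0@bk0 /backup/vm-100-disk-0.full

    # Each following run: snapshot, then export only the changes since the last snapshot
    rbd snap create vmpool/vm-100-disk-0@bk1
    rbd export-diff --from-snap bk0 vmpool/vm-100-disk-0@bk1 /backup/vm-100.d1

    # "Synthetic full": merge older incrementals offline so only a short chain is kept
    rbd merge-diff /backup/vm-100.d1 /backup/vm-100.d2 /backup/vm-100.d1-2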
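
On item 5 (reads served from the SSDs): one way to get that effect without a custom CRUSH rule is primary affinity; this is only a sketch and may not be what the setup I read about actually did. The OSD ids are examples, and some older releases need "mon osd allow primary affinity = true" before the command is accepted.

    # Make the spinner OSDs ineligible to act as primaries, so the SSD copy
    # serves reads while writes still go to every replica
    for id in 4 5 6 7; do        # HDD OSD ids (example)
        ceph osd primary-affinity osd.$id 0
    done

    # SSD OSDs keep the default primary affinity of 1; spot-check the changed values
    ceph osd dump | grep primary_affinity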
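
On item 6 (the 2/1 failure window): not a counterargument to anything in the post, just the knobs being discussed; checking a pool and moving it to the commonly recommended 3/2 looks roughly like this (pool name is a placeholder).

    # See what a pool currently runs with
    ceph osd pool get vmpool size
    ceph osd pool get vmpool min_size

    # Common recommendation: three copies, at least two available to accept I/O
    ceph osd pool set vmpool size 3
    ceph osd pool set vmpool min_size 2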
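
On item 7 (a 4+2 EC pool for RBD): a sketch of creating the profile and pools on BlueStore; allow_ec_overwrites (Luminous or later) is what lets RBD sit on an EC pool without a cache tier. Names and PG counts are placeholders.

    # 4 data + 2 coding chunks, one chunk per host
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host

    # EC data pool plus a small replicated pool for the RBD metadata
    ceph osd pool create ec42_data 256 256 erasure ec42
    ceph osd pool create rbd_meta 64 64 replicated
    ceph osd pool set ec42_data allow_ec_overwrites true
    ceph osd pool application enable ec42_data rbd
    ceph osd pool application enable rbd_meta rbd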
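
On item 8 (creating the RBD volume is the problem): outside the GUI the image can be created by hand, with its header in the replicated pool and its data objects on the EC pool, and then attached to the VM. Pool and image names follow the previous sketch and are placeholders.

    # Metadata lives in rbd_meta, the data goes to ec42_data
    rbd create rbd_meta/vm-100-disk-1 --size 100G --data-pool ec42_data

    # Confirm where the data objects will land
    rbd info rbd_meta/vm-100-disk-1 | grep data_pool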
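
On item 12 (working around not being able to specify the data pool): if I recall correctly, librbd has a client-side default for exactly this, so tools that create images without knowing about --data-pool still place the data on the EC pool while the image header stays in the replicated pool it was created in. Section placement and the pool name are assumptions; verify the option against your Ceph release.

    # /etc/ceph/ceph.conf (shared via /etc/pve on a hyperconverged node)
    [client]
        # hypothetical pool name from the EC sketch above
        rbd default data pool = ec42_data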
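
On item 14: the preview cuts off before saying what actually gets blacklisted, so this is only a guess at the shape of the fix; if the problem was host-side LVM scanning and claiming devices it should leave alone (for example RBD-backed disks or zvols), the usual mechanism is a reject rule in LVM's global_filter. The patterns below are purely illustrative.

    # /etc/lvm/lvm.conf (hypothetical filter; adjust the patterns to the
    # devices that actually need excluding)
    devices {
        # "r|...|" rejects matching device paths, "a|.*|" accepts the rest
        global_filter = [ "r|/dev/rbd.*|", "r|/dev/zd.*|", "a|.*|" ]
    }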
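
On item 15 (corosync join failures): the one trap the post mentions, missing host entries, is easy to rule out; every node should resolve every other node's name to its cluster-network address, identically on all nodes. Hostnames and addresses are placeholders.

    # /etc/hosts -- the same entries on every node
    192.168.10.11   pve1.example.local pve1
    192.168.10.12   pve2.example.local pve2
    192.168.10.13   pve3.example.local pve3

    # Quick sanity checks from each node before (re)joining
    ping -c1 pve2
    pvecm status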
