Search results

  1. Proxmox CEPH multiple storage classes questions

    Questions related to using an SSD to back the HDDs: if I'm going to use a single SSD to back multiple HDDs, what SSD-to-HDD ratio would you recommend, with the HDDs likely being 8TB (SATA connected) and the SSDs likely ~3.5TB (SATA connected)? Am I correct to guess that if I plan to...
  2. Proxmox CEPH multiple storage classes questions

    Alwin, thank you for your quick response and clarification. Much appreciated. I think what I was trying to ask with the "Proxmox implementation" (although poorly worded on my part) was whether there were specific ways I needed to do things, mainly related to the GUI (as I know not all things can be...
  3. Proxmox CEPH multiple storage classes questions

    Hey there, I currently have an existing 10-node Proxmox 5.4-5 and CEPH Luminous configuration. Each of the 10 nodes is running Proxmox and CEPH side by side (meaning I have VMs running on the same nodes that serve the RBD pool they run from). Each node has a number of SSDs...
  4. Duplicate /etc/logrotate.d/ceph-common files

    Any update on this? It's been over a year; I'm fairly current on Luminous (having upgraded from Jewel) and still get these errors daily...
  5. [SOLVED] Proxmox Ceph implementation seems to be broken (OSD Creation)

    @Alwin A bit confused by your response in the ticket...this does seem like a bug on Proxmox's side....
  6. Monitoring Ceph Cluster with Prometheus and Grafana

    Just want to throw this out there that this is linked to the "older way" of monitoring CEPH in Prometheus. It works but is more complicated than it needs to be (I'm figuring this out now). As of Luminous, there is a Prometheus plugin module that you simply enable on your cluster and point your...
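    A minimal sketch of the newer approach mentioned above, assuming Luminous or later and that Prometheus can reach the active manager (the host name ceph-mgr1 is a placeholder):

        # Enable the built-in Prometheus exporter module on the Ceph manager
        ceph mgr module enable prometheus

        # The module serves metrics on TCP 9283 by default; add a scrape job
        # to prometheus.yml pointing at the active manager, e.g.:
        #   - job_name: 'ceph'
        #     static_configs:
        #       - targets: ['ceph-mgr1:9283']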
  7. Corrupt Filesystem after snapshot

    /SMH..... of course you are right.... =)
  8. Corrupt Filesystem after snapshot

    What storage protocols are in use, Paolo? NFS, iSCSI, RBD? I can confirm that there seem to be three things in play when I experience this failure: 1) the machine is running under load, 2) it's on a storage device that is mounted with NFS or RBD, and 3) it's when we snapshot RAM. I was speaking to...
  9. Ceph - Balancing OSD distribution (new in Luminous)

    I ran through these settings yesterday and they worked great. The earlier point about not being able to run upmap because the Proxmox version of the CEPH client was still Jewel seems to have changed, as when I run ceph features I only see luminous listed in the client section. Can someone...
  10. Ceph - Balancing OSD distribution (new in Luminous)

    Ok, so both of these lines do the exact same thing, which is setting the default percentage difference. Got it. Thanks!
  11. Ceph - Balancing OSD distribution (new in Luminous)

    Another question: what is the difference between the line you suggest, ceph config-key set "mgr/balancer/max_misplaced": "0.01", and the one in the documentation, ceph config set mgr mgr/balancer/max_misplaced .07 (7%)? Are they both doing the same thing?
  12. Ceph - Balancing OSD distribution (new in Luminous)

    Also, a question: it appears you run this as a manual task... in the CEPH documentation under Balancer Plugin it appears that it "runs automatically" if it's enabled with ceph balancer on. From experience, is it better to run it once in a while manually like you are doing, or just leave...
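    For reference, a minimal sketch of the automatic mode being asked about, assuming a Luminous cluster with the balancer module available:

        # Enable the balancer module, pick a mode, then let it run on its own
        ceph mgr module enable balancer     # usually already enabled on Luminous
        ceph balancer mode upmap            # or crush-compat for older clients
        ceph balancer on                    # rebalance automatically from now on
        ceph balancer status                # check the mode and whether it is active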
  13. Ceph - Balancing OSD distribution (new in Luminous)

    BTW: I just ran ceph features and under client mine is showing luminous. I upgraded from 4.x to 5.2-9 this weekend, so based on this it appears that Proxmox upgraded their client and we can now use upmap... correct?
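    A short sketch of that check, assuming Luminous (the exact output layout varies slightly by release):

        # Every group listed under "client" should report luminous before using upmap
        ceph features

        # Then tell the cluster to refuse older clients so upmap mappings are safe to rely on
        ceph osd set-require-min-compat-client luminous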
  14. qcow2 corruption after snapshot or heavy disk I/O

    Just updating this thread....seems like there is an update/patch/possible solution here: https://forum.proxmox.com/threads/corrupt-filesystem-after-snapshot.32232/page-3
  15. Ceph - Balancing OSD distribution (new in Luminous)

    David, a big THANK YOU for posting this info here. I just upgraded my environment this past weekend from 4.x to 5.2-9, and one of the motivators to move to Luminous was to be able to rebalance my OSDs. I have large percentage skews. I'll follow your instructions! Looking forward to having a...
  16. Corrupt Filesystem after snapshot

    Waiting to see how this turns out... would be great to not end up with a 30PB file after a failed snapshot. We were able to repro every time by putting the VM under heavy load and attempting a live snapshot.
  17. Ceph - Balancing OSD distribution (new in Luminous)

    I just upgraded from Jewel to Luminous and was wondering if this is still relevant. I see that all of my nodes are currently configured with "alg straw" so it appears to still be the case... Thanks! Dan
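    One way to inspect the bucket algorithm referred to here, sketched under the assumption of a Luminous cluster with crushtool installed:

        # Dump and decompile the CRUSH map, then check each bucket's algorithm
        ceph osd getcrushmap -o crushmap.bin
        crushtool -d crushmap.bin -o crushmap.txt
        grep "alg " crushmap.txt

        # Newer Ceph releases can convert straw buckets to straw2 in one step;
        # this triggers some data movement, so plan for rebalancing
        ceph osd crush set-all-straw-buckets-to-straw2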