ceph hammer

  1. J

    Journals missing or in the wrong place?

    Good morning all. On each of my Ceph nodes I have 2 SSDs for journals, /dev/sdj and /dev/sdk. While upgrading from Hammer to Jewel I noticed something that I think is odd, but I'm not sure: it appears that some of my OSDs either may not have journals, or the journal is not set to one of the... (a quick way to check the journal targets is sketched after this list)
  2. stefws

    patched from 3.4.15 to 3.4.16, now ceph 0.94.9 fails to start

    We have an older 7-node 3.4 test lab (running Ceph Hammer 0.94.9 on 4 of the nodes and only VMs on the other 3), which we wanted to patch up today, but after rebooting our OSDs won't start; it seems Ceph can't connect to the cluster. Wondering why that might be? Previous version before patching... (a basic monitor-reachability check is sketched after this list)
  3. A

    Ceph monitors’ and OSDs’ daemons don’t come up.

    We have created a two-node Proxmox v4.4 cluster with a Ceph Hammer pool running on the same nodes. For about 6 weeks it had been working as expected, but today we faced a prolonged local blackout in our office, and both cluster nodes were powered off accidentally and unexpectedly. After this...
  4. G

    [SOLVED] CephFS uneven data distribution

    Hello everyone, I hope someone here can help me with CephFS. As written above, my problem is that in my Proxmox cluster with CephFS the data is distributed very unevenly across the OSDs. The setup is as follows: currently 4 servers (a 5th is planned). Each server has... (a quick way to quantify the imbalance is sketched after this list)
  5. L

    Proxmox 4.2 & CEPH Hammer, create OSD failed

    Dear community, I have been using Proxmox for many years, and our DC has grown, so I decided to implement Ceph to get better density on my Proxmox nodes. Currently we are running 7 nodes, and on 4 of them I have installed Ceph; each of those 4 nodes has 2 OSDs, and the journal device is one SSD...
  6. S

    Ceph cache tier and disk resizing

    Hello, I'm currently running a Ceph cluster (Hammer); last weekend I implemented a cache tier (writeback mode) of SSDs for better performance. Everything seems fine except for disk resizing. I have a Windows VM with a raw RBD disk; I powered off the VM, resized the disk, verified that both ceph... (a quick check of the size Ceph reports is sketched after this list)
  7. CTCcloud

    Ceph on IPv6 - pveceph createmon not working

    This cluster is working over IPv6, and the Ceph install is intended to be entirely IPv6. Deploying Ceph with pveceph install -version hammer works just fine on all nodes. When deploying the first monitor using pveceph createmon, it reaches the end with "Starting ceph-create-keys on "node... (a quick ceph.conf IPv6 check is sketched after this list)
  8. N

    PVE Ceph High IO and CPU Utilization

    Hello Proxmox community. I'm posting here today as I'm out of ideas on an issue I'm having. I'm currently running PVE 4.1-1/2f9650d4 in a 3-node cluster with pveceph (Hammer). Every now and then a node's CPU will go to 99.9%, then another node will follow, etc.; during this time quorum will... (a quick PG-state check for such spikes is sketched after this list)
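
Quick diagnostic sketches

For the journal-placement thread (item 1), here is a minimal Python sketch of how one could list where each OSD's journal actually points. It assumes the default Hammer/Jewel filestore layout under /var/lib/ceph/osd/ceph-<id>/, where the journal is either a symlink to a dedicated partition (e.g. on /dev/sdj or /dev/sdk) or a plain file colocated on the OSD's data disk; the paths are assumptions, not taken from the thread.

    #!/usr/bin/env python3
    # Sketch: report each OSD's journal target (assumes the default
    # /var/lib/ceph/osd/ceph-<id>/ filestore layout used by Hammer/Jewel).
    import glob
    import os

    for osd_dir in sorted(glob.glob("/var/lib/ceph/osd/ceph-*")):
        journal = os.path.join(osd_dir, "journal")
        if os.path.islink(journal):
            # Symlink -> dedicated journal partition (often via by-partuuid).
            print(f"{osd_dir}: journal -> {os.path.realpath(journal)}")
        elif os.path.isfile(journal):
            # Plain file -> journal colocated on the OSD data disk.
            print(f"{osd_dir}: journal is a colocated file")
        else:
            print(f"{osd_dir}: no journal found")

Run on each node, this should make it obvious which OSDs point at the SSD partitions and which, if any, fell back to a colocated file journal.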
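
For the thread about OSDs not starting after the 3.4.15 to 3.4.16 patch (item 2), a first check is whether the node can still reach its monitors at all. The sketch below is only an illustration under assumptions: the monitor addresses are placeholders, and it tests plain TCP reachability on the default Hammer monitor port 6789, nothing Ceph-specific.

    #!/usr/bin/env python3
    # Sketch: test TCP reachability of the monitors (placeholder addresses;
    # Hammer monitors listen on port 6789 by default).
    import socket

    MON_ADDRS = [("192.0.2.11", 6789), ("192.0.2.12", 6789), ("192.0.2.13", 6789)]

    for host, port in MON_ADDRS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} NOT reachable ({exc})")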
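
For the CephFS uneven-distribution thread (item 4), one way to quantify the imbalance before reweighting or adding PGs is to compare per-OSD utilization. The sketch below assumes it runs on a node with an admin keyring and that `ceph osd df` with JSON output is available (it is in Hammer and later); the field names are the ones commonly seen in that output.

    #!/usr/bin/env python3
    # Sketch: print per-OSD utilization and the spread between the fullest
    # and emptiest OSD (assumes a working admin keyring on this node).
    import json
    import subprocess

    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    osds = json.loads(out)["nodes"]

    usage = sorted(((o["name"], o["utilization"]) for o in osds),
                   key=lambda item: item[1], reverse=True)

    for name, util in usage:
        print(f"{name}: {util:5.1f}% used")

    print(f"spread: {usage[0][1] - usage[-1][1]:.1f} percentage points")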
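
For the cache-tier and disk-resizing thread (item 6), the size Ceph itself reports for the image can be compared with what the guest sees. The pool and image names below are placeholders, and the sketch assumes the rbd CLI and client credentials are available on the node.

    #!/usr/bin/env python3
    # Sketch: print the size Ceph reports for an RBD image (placeholder
    # pool/image names; requires the rbd CLI and client credentials).
    import json
    import subprocess

    POOL = "rbd"              # placeholder pool name
    IMAGE = "vm-100-disk-1"   # placeholder image name

    info = json.loads(subprocess.check_output(
        ["rbd", "info", f"{POOL}/{IMAGE}", "--format", "json"]))
    print(f"{POOL}/{IMAGE}: {info['size'] / 1024 ** 3:.1f} GiB according to Ceph")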
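
For the IPv6 createmon thread (item 7), Ceph on an IPv6-only network needs ms_bind_ipv6 = true and IPv6 monitor/public addresses in ceph.conf, so a quick look at the [global] section is a sensible first step. The sketch below only reports those keys; it assumes the usual /etc/ceph/ceph.conf path and an INI-style file that Python's configparser can read.

    #!/usr/bin/env python3
    # Sketch: show the IPv6-relevant keys from /etc/ceph/ceph.conf
    # (assumes the standard path and INI-style syntax). An IPv6-only
    # cluster is expected to have ms_bind_ipv6 set and bracketed IPv6
    # addresses in mon_host / public_network.
    import configparser

    conf = configparser.ConfigParser(strict=False, interpolation=None)
    conf.read("/etc/ceph/ceph.conf")
    global_section = conf["global"] if conf.has_section("global") else {}

    for key in ("ms_bind_ipv6", "ms bind ipv6",
                "mon_host", "mon host",
                "public_network", "public network"):
        if key in global_section:
            print(f"{key} = {global_section[key]}")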
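
For the high IO/CPU thread (item 8), CPU spikes that roll from node to node often coincide with peering, recovery, or backfill, so capturing the PG state breakdown during a spike is a cheap data point. The sketch below assumes an admin keyring on the node and the JSON layout `ceph status` typically emits (a pgmap with pgs_by_state); adjust if your version reports the fields differently.

    #!/usr/bin/env python3
    # Sketch: print the PG state breakdown at the moment of a CPU spike
    # (assumes an admin keyring and the usual `ceph status` JSON layout).
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "status", "--format", "json"]))

    for entry in status["pgmap"]["pgs_by_state"]:
        print(f"{entry['count']:6d}  {entry['state_name']}")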
