Search results

  1. PVE 3.4 CEPH cluster - failed node recovery

    Yes, that is what I meant. There were three single-drive arrays in node 1. My attempt to remove an OSD resulted in an error pop-up saying "Connection error 595: No route to host". Just to reiterate - node 1 with its three OSDs has failed and is offline. The only Ceph warning is that one monitor is... (a CLI sketch for removing the dead OSDs follows these results)
  2. PVE 3.4 CEPH cluster - failed node recovery

    Before I go to the trouble of travelling to the colo at night for three days in a row to move OSDs, perhaps someone can provide some clarification: with the cluster in this state (12 OSDs: 9 up, 9 in, and the GUI showing 3 OSDs down/out), is the data contained on the Ceph SAN in danger of... (see the health-check sketch after these results)
  3. PVE 3.4 CEPH cluster - failed node recovery

    I will try moving one OSD either tonight or tomorrow night. But it may not work for another reason - the failed node was running on a Dell R620 with an H710 controller, which did not allow pass-through control of the HDs. We had to create single-drive RAID0 arrays. The other three nodes are Dell...
  4. PVE 3.4 CEPH cluster - failed node recovery

    Yes, that's the book. It's probably difficult to write a book on technology that is changing so rapidly.
  5. PVE 3.4 CEPH cluster - failed node recovery

    It says 3 OSDs down/out. I'm attaching a screenshot.
  6. PVE 3.4 CEPH cluster - failed node recovery

    Thanks much for your quick response. At this point the server and its associated three OSDs have been offline for three weeks. Everything seems healthy other than the three missing OSDs. If I put the three OSDs into the other nodes, do I have to do anything to move them or do they get recognized...
  7. PVE 3.4 CEPH cluster - failed node recovery

    Is anyone out there besides me having this problem? I'm still looking for a solution.
  8. PVE 3.4 CEPH cluster - failed node recovery

    Wolfgang - Thanks for your reference to that link. There are two sections on that page that may apply here. "Remove a cluster node" - this looks like it will work for our case. "Re-installing a cluster node" - in this case the node has failed and is not accessible for the purpose of copying... (see the node-removal sketch after these results)
  9. PVE 3.4 CEPH cluster - failed node recovery

    I have a four-node PVE / Ceph cluster with three OSDs on each. All nodes are licensed with a PVE Community Subscription. One node has failed and must have PVE reinstalled. The cluster and all VMs are working fine on the remaining three nodes. Please describe the best method for replacing the...
  10. Modify network without restarting

    In past versions (pre 3.4) I found it possible to modify network settings via the CLI by editing /etc/network/interfaces and then restarting network services. But when I tried doing this on a new cluster (with a community support license) running version 3.4-6, it did not work. This was useful... (see the interface-reload sketch after these results)
  11. Proxmox VE Ceph GUI Pools %

    The problem I am having is on a new PVE / Ceph cluster (version 3.4-6). When viewing the GUI -> PM2/Ceph/Pools, under the Used % column I see 0.00, although I have used 2.24 TB, which shows up in the status and under the Used % column of Ceph/OSD. On another cluster with a little older... (a ceph df sketch follows these results)
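
For result 1 above: since node 1 is offline, the GUI's OSD removal fails with "Connection error 595: No route to host", but the dead OSDs can be removed from any surviving node with the Ceph CLI. A minimal sketch, assuming the failed OSDs are osd.0, osd.1 and osd.2 (placeholder IDs - substitute whatever ceph osd tree shows as down/out):

    # repeat for each of the three dead OSDs
    ceph osd out 0                # mark the OSD out so Ceph rebalances its data
    ceph osd crush remove osd.0   # remove it from the CRUSH map
    ceph auth del osd.0           # delete its authentication key
    ceph osd rm 0                 # remove the OSD entry from the cluster map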
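
For result 2 above: whether the data is in danger depends on whether any placement groups have lost all of their replicas, and the CLI reports that directly. A quick health-check sketch using standard Ceph commands, run on any surviving node:

    ceph -s              # overall health, OSD counts (e.g. 9 up, 9 in) and PG states
    ceph osd tree        # shows which OSDs are down/out and on which host they sit
    ceph health detail   # lists degraded, undersized or (worst case) incomplete PGs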
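
For results 6, 8 and 9 above: removing the failed node from the Proxmox cluster and re-using its disks are two separate steps. A sketch under stated assumptions - the node name below is a placeholder, and it is not guaranteed that an H710 single-drive RAID0 volume will be readable when attached to a different controller:

    # on a surviving node, remove the dead node from the PVE cluster
    pvecm delnode failed-node-name   # placeholder node name

    # after physically moving a drive into a surviving node, check whether
    # its OSD data partition is recognised and try to activate it
    ceph-disk list
    ceph-disk activate /dev/sdX1     # /dev/sdX1 is a placeholder device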
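
For result 10 above: on PVE 3.4 (Debian 7) the usual way to apply an edit to /etc/network/interfaces without rebooting is to bounce the affected interface or restart networking as a whole. A minimal interface-reload sketch, assuming the change only touches vmbr0 (a placeholder bridge name); changes to a bridge carrying live VM traffic may still need a reboot to apply cleanly:

    ifdown vmbr0 && ifup vmbr0   # re-read /etc/network/interfaces for this interface
                                 # (doing this over SSH on the same interface can drop the session)
    service networking restart   # heavier-handed: restart all configured interfaces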
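
For result 11 above: the per-pool usage the GUI should be showing can also be read straight from Ceph, which helps tell a display bug apart from a reporting problem. Standard commands:

    ceph df    # global usage plus per-pool USED and %USED columns
    rados df   # per-pool object and space counts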
