Yes, that is what I meant. There were three single-drive arrays in node 1.
My attempt to remove an OSD resulted in an error pop-up saying "Connection error 595: No route to host".
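(As an aside, a minimal sketch of doing the removal from a surviving node with the plain Ceph CLI, since the GUI call apparently needs to reach the failed host; the OSD number below is just an example.)

```
# Hedged sketch, not the official procedure: remove a dead OSD from the
# cluster maps using the Ceph CLI on any surviving node.
OSD_ID=3                              # example OSD number on the failed node
ceph osd out "$OSD_ID"                # stop CRUSH from mapping data to it
ceph osd crush remove "osd.$OSD_ID"   # remove it from the CRUSH map
ceph auth del "osd.$OSD_ID"           # delete its cephx key
ceph osd rm "$OSD_ID"                 # drop the OSD entry itself
```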
Just to reiterate - node 1, with three OSDs, has failed and is offline.
The only Ceph warning is that one monitor is...
Before I go to the trouble of travelling to the colo at night for three days in a row to move OSDs, perhaps someone can provide some clarification: with the cluster in this state (12 OSDs: 9 up, 9 in, and the GUI showing 3 OSDs down/out), is the data contained on the Ceph SAN in danger of...
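For what it's worth, whether the data is at risk mostly comes down to the pool's replica count versus what has been lost. A quick way to check from any surviving node (the pool name "rbd" is only an example):

```
ceph health detail              # lists degraded/undersized PGs, if any
ceph osd tree                   # shows which OSDs are down/out and on which host
ceph osd lspools                # list the actual pool names
ceph osd pool get rbd size      # replica count of the pool (name assumed)
ceph osd pool get rbd min_size  # replicas required to keep serving I/O
```

If the pools keep 3 replicas spread across hosts, losing a single host generally leaves the data degraded but intact; with only 2 replicas the margin is much thinner.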
I will try moving one OSD either tonight or tomorrow night. But it may not work for another reason - the failed node was running on a Dell R620 with an H710 controller, which did not allow pass-through control of the drives. We had to create single-drive RAID0 arrays. The other three nodes are Dell...
Thanks much for your quick response.
At this point the server and its three associated OSDs have been offline for three weeks.
Everything seems healthy other than the three missing OSDs.
If I put the three OSDs into the other nodes, do I have to do anything to move them, or do they get recognized...
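A rough sketch of what to check after moving a disk, assuming the Ceph tooling that shipped with PVE 3.x (ceph-disk) and assuming the new controller presents the single-drive RAID0 volume the same way; the device name is a placeholder:

```
ceph-disk list                  # shows which partitions hold OSD data/journals
ceph-disk activate /dev/sdx1    # mount the OSD partition and start its daemon
ceph osd tree                   # the moved OSD should show up under the new host
```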
Wolfgang -
Thanks for your reference to that link.
There are two sections on that page that may apply here.
Remove a cluster node - this looks like it will work for our case (see the sketch below).
Re-installing a cluster node - in this case the node has failed and is not accessible for the purpose of copying...
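A minimal sketch of the "remove" path, run from a surviving node, assuming the dead node is named pm1 and its monitor ID is 0 (both placeholders):

```
pvecm nodes          # confirm the failed node is listed as offline
pvecm delnode pm1    # remove it from the Proxmox cluster configuration
ceph mon remove 0    # drop its Ceph monitor too, clearing the mon warning
```

As I understand it, the reinstalled replacement is then added back as a fresh node rather than reusing the old identity.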
I have a four-node PVE / Ceph cluster with three OSDs on each. All nodes are licensed with a PVE Community Subscription.
One node has failed and must have PVE reinstalled.
The cluster and all VM's are working fine on the remaining three nodes.
Please describe the best method for replacing the...
In past versions (pre-3.4) I found it possible to modify network settings via the CLI by editing /etc/network/interfaces and then restarting the network service. But when I tried doing this on a new cluster (with a community support license) running version 3.4-6, it did not work. This was useful...
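For reference, the manual approach that used to work, with example names and addresses only; on 3.4 it may also be worth checking whether the GUI has left pending changes in /etc/network/interfaces.new, which are only applied at reboot:

```
# Hedged sketch of the classic manual edit on a PVE 3.x node; the bridge name,
# address and NIC below are examples only.
nano /etc/network/interfaces
#   auto vmbr0
#   iface vmbr0 inet static
#       address 192.168.1.10
#       netmask 255.255.255.0
#       gateway 192.168.1.1
#       bridge_ports eth0
#       bridge_stp off
#       bridge_fd 0
service networking restart      # or: ifdown vmbr0 && ifup vmbr0, or reboot
```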
The problem that I am having is on a new Proxmox VE / Ceph cluster (version 3.4-6).
When viewing the GUI -> PM2/Ceph/Pools, the Used % column shows 0.00, although I have used 2.24 TB, which shows up in the status and under the Used % column of Ceph/OSD.
On another cluster with a little older...
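In case it helps to cross-check the GUI numbers, per-pool usage is also visible from the CLI:

```
ceph df      # per-pool used bytes and %USED
rados df     # per-pool object and byte counts
ceph -s      # overall cluster usage (where the 2.24 TB figure shows up)
```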