Proxmox 7.1-8
Yesterday I executed a large delete operation on the ceph-fs pool (around 2 TB of data).
The operation completed successfully within a few seconds, without any noticeable errors.
Then the following problem occurred:
7 out of 32 OSDs went down and out.
Trying to set them in and...
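For anyone hitting the same symptom: assuming the OSD daemons themselves are still healthy, the usual sequence for bringing down/out OSDs back is roughly the following (the OSD id 12 is just an example):

ceph osd tree                      # confirm which OSDs are down/out
systemctl restart ceph-osd@12      # restart the affected daemon on its host
ceph osd in 12                     # mark it back in once the daemon is up
ceph -s                            # watch backfill/recovery progress

Marking an OSD "in" only helps if its daemon is actually running again; if the daemon keeps crashing, its log on the host is the place to look first.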
I have a 2-node Ceph cluster serving one pool.
The data is in 2-replica mode.
I took one of the nodes down for maintenance while the other node kept running.
Later there was a power outage which caused the second node to restart, but the problem is it had RAID card caching enabled and the battery...
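A quick way to see what such a setup leaves you exposed to is to check the pool's replication settings and the post-crash health (the pool name below is a placeholder):

ceph osd pool get mypool size       # 2 means two copies total
ceph osd pool get mypool min_size   # 1 means I/O continues on a single copy
ceph health detail                  # lists inconsistent or unfound PGs after the crash

With size=2 and one node in maintenance, the surviving node holds the only copy of the data, so a lost RAID write cache on that node can translate directly into unfound objects.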
Hi guys,
I had a power outage that forced me to re-image one of the two servers in my Proxmox cluster. After re-imaging I wasn't able to get the monitors or managers to come up, so I ended up wiping the monitors/managers.
I now have a new Ceph cluster with new monitors/managers but no pool(s)...
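For reference: if the OSD data itself survived the re-image, the monitor store can in principle be rebuilt from the OSDs instead of starting a fresh cluster. A rough sketch of the documented procedure (paths and keyring location are illustrative, and the OSDs must be stopped first):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op update-mon-db --mon-store-path /tmp/mon-store
ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /etc/ceph/admin.keyring

The first command is repeated for every OSD so the collected store ends up with the newest copy of the cluster maps; creating empty pools on a brand-new cluster will not make the old data reappear.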
Hello everyone,
I'm currently testing Proxmox 4.4 HA with Ceph Hammer, all in a 3-node cluster. Right now I'm running crash tests. Since I couldn't find a Ceph forum, I'll try it here.
The RBD pool settings are 3/2 and 3/1, respectively.
One thing stood out: when 2 of the 3 nodes are down...
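The behavior in that test follows from min_size: with 3/2, losing 2 of 3 nodes leaves one replica, which is below min_size=2, so the PGs go inactive and I/O blocks; with 3/1, a single surviving replica keeps serving I/O. Temporarily allowing single-replica writes on a 3/2 pool would look like this (pool name illustrative, and it is risky, since there is no redundancy left):

ceph osd pool set rbd min_size 1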