Greetings to all,
I am seeking assistance with a challenging issue related to Ceph that has significantly impacted the company I work for.
Our company has been operating a cluster with three nodes hosted in a data center for over 10 years. This production environment runs on Proxmox (version...
Hi,
I read this sentence in the Ceph hardware recommendations: "Provision at least 10 Gb/s networking in your datacenter, both among Ceph hosts and between clients and your Ceph cluster"
This is my ceph configuration:
[global]
auth_client_required = cephx
auth_cluster_required = cephx...
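For context, that 10 Gb/s recommendation usually maps onto the public_network and cluster_network settings in the [global] section; a minimal sketch with example subnets (not my actual values):

# illustrative subnets only; both should sit on at least 10 Gb/s links
public_network = 10.10.10.0/24     # client <-> Ceph traffic
cluster_network = 10.10.20.0/24    # OSD replication and heartbeat traffic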
So currently I have 3 nodes, 3x16x18TB HDDs, in a Ceph cluster running normally. Today I went to add 2 more nodes with 2x12x12TB drives.
All was fine until I went to add the OSDs into Ceph. I set the norecover flag but forgot to set the noout and norebalance flags.
The cluster failed...
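(For context on the flags mentioned: they are set and cleared with the Ceph CLI. A minimal sketch of the sequence usually suggested before adding OSDs; the exact order shown here is illustrative:

ceph osd set noout         # prevent down OSDs from being marked out
ceph osd set norebalance   # prevent data from being rebalanced
ceph osd set norecover     # prevent recovery traffic

# ... create the OSDs on the new nodes, e.g. with pveceph osd create or the Proxmox GUI ...

ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout)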
Hi,
We are running a 5-node Proxmox Ceph cluster. Three of the nodes have SSD drives, which make up a pool called ceph-ssd-pool1. The configuration is as follows:
Ceph Network: 10G
SSD drives: Kingston SEDC500M/1920G (which they market as datacenter-grade SSDs, claiming to...
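(For reference, an SSD-only pool like ceph-ssd-pool1 is typically backed by a CRUSH rule restricted to the ssd device class; a rough sketch, where the rule name and PG count are illustrative assumptions rather than how the pool was necessarily created:

ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create ceph-ssd-pool1 128 128 replicated ssd-only
ceph osd pool application enable ceph-ssd-pool1 rbd)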