Thank you for your reply. I checked all services and Ceph network access on all running servers, and everything is running normally. If you can give me more details about which services may be down, I can locate the problem more accurately.
Yesterday the Ceph status changed from warning to error, as shown in the attached image. Do I just have to wait?
ceph -s
  cluster:
    id:     17fc003a-208b-4c20-82e2-c59307bd8334
    health: HEALTH_ERR
            1 scrub errors
            Reduced data availability: 137 pgs inactive, 100 pgs down...
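If it helps narrow things down, these are the queries I understand are the standard way to see exactly which OSDs and PGs are affected (I have not pasted their output here):

ceph health detail            # lists the individual PGs that are inactive/down and the scrub errors
ceph osd tree                 # shows which OSDs are down or out after the failed servers
ceph pg dump_stuck inactive   # lists PGs stuck inactive and the OSDs they map to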
Last week my Proxmox rack had a problem with the power supply. The mainboards of 5 servers were damaged. I fixed 3 of the servers, but 2 servers are not back online. Ceph storage shows a warning as below:
ceph -s
  cluster:
    id:     17fc003a-208b-4c20-82e2-c59307bd8334
    health: HEALTH_WARN
            Reduced data availability: 137...
Dear Support Team,
I have 3 Proxmox servers in a cluster, and each server has 1 local hard disk for VM guests.
PVE1 - Disk 1 = 300 GB
PVE2 - Disk 2 = 1 TB
PVE3 - Disk 3 = 300 GB
I followed this document to configure the Volume Group...
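For context, what I did on each node was along these lines (the disk name /dev/sdb and the volume group name vmdata are just examples; mine differ per node):

pvcreate /dev/sdb                                        # initialize the local disk as an LVM physical volume
vgcreate vmdata /dev/sdb                                 # create a volume group for VM guest disks
pvesm add lvm vmdata --vgname vmdata --content images    # register the VG as LVM storage in Proxmox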
Thank you so much for your information. Can I reduce the default size from 3 to 2? If yes, could any problem occur in the future? As I understand it, this value of 3 means 3 copies of 1 VM; what if I want to keep only 2 copies of 1 VM?
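From what I have read so far, the change itself would be a per-pool setting, something like this (assuming my pool is named Ceph-vm):

ceph osd pool set Ceph-vm size 2       # keep 2 copies of each object instead of 3
ceph osd pool set Ceph-vm min_size 1   # lowest copy count that still allows I/O (less safe)

I mainly want to confirm whether running with only 2 copies can cause problems before I apply it.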
I am running 2 Proxmox clusters, and in one cluster I upgraded from 5.1 to 5.2. After the upgrade I found that my Ceph storage shows wrong information. Ceph overall shows a total storage of 19.97 TB, whereas Ceph-vm shows only 4.58 TB.
Before the upgrade, total storage and Ceph-vm storage showed the same...
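For reference, this is how I have been comparing the numbers; as I understand it, ceph df separates the raw cluster capacity from the per-pool usable space after replication (Ceph-vm is the name of my pool):

ceph df                           # GLOBAL shows raw capacity; POOLS shows per-pool usage and MAX AVAIL
ceph osd pool get Ceph-vm size    # the replica count that the raw capacity is divided by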