Search results

  1. G

    Regular errors on ceph pg's!

    Hi! I have now moved deep scrubbing to non-production hours (the only activity then is backing up all VMs & CTs to a server outside the cluster): debug ms = 0/0, osd scrub begin hour = 0, osd scrub end hour = 8, osd scrub sleep = 0.1. All day it is "cluster [INF] overall HEALTH_OK", and then the deep-scrub errors appear at...
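
    For reference, a minimal sketch of how the settings quoted above might look in /etc/ceph/ceph.conf; the option names and values are from the post, but their placement under [global] and [osd] is an assumption:

        [global]
        # silence messenger debug logging
        debug ms = 0/0

        [osd]
        # only allow scrubbing between 00:00 and 08:00
        osd scrub begin hour = 0
        osd scrub end hour = 8
        # sleep between scrub chunks to reduce client impact
        osd scrub sleep = 0.1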
  2. G

    Regular errors on ceph pg's!

    Reducing the cache by half did not help. :( Gosha
  3. G

    Regular errors on ceph pg's!

    Hi! I will try this advice in the evening. Gosha
  4. G

    Regular errors on ceph pg's!

    Again.... 2018-06-19 14:00:00.000210 mon.cn1 mon.0 192.168.110.1:6789/0 11002 : cluster [INF] overall HEALTH_OK 2018-06-19 15:00:00.000267 mon.cn1 mon.0 192.168.110.1:6789/0 11294 : cluster [INF] overall HEALTH_OK 2018-06-19 16:00:00.000298 mon.cn1 mon.0 192.168.110.1:6789/0 11573 : cluster...
  5. G

    Regular errors on ceph pg's!

    These controllers do allow setting this parameter; it is currently set to Auto. Should I change it to 256, or to 128? I understand that, but unfortunately there is no such option. :(
  6. G

    Regular errors on ceph pg's!

    Hi! It looks very similar. But I did not see a solution to this problem. :( Gosha
  7. G

    Regular errors on ceph pg's!

    This is a very low-budget organization. :(
  8. G

    [SOLVED] noVNC console don't work

    Hi! Yes! Clearing the cache and everything worked! :) Thanks!
  9. G

    [SOLVED] noVNC console don't work

    Hi! When I try to open any noVNC console: Gosha
  10. G

    Regular errors on ceph pg's!

    Here are the drive models: 1TB - model MB1000GCWCV, firmware version HPGH; 2TB - model MB2000GFDSH, firmware version HPG2. Gosha
  11. G

    Regular errors on ceph pg's!

    OK. All my servers (three ProLiant DL380 Gen8 and one DL160 Gen8) are equipped with Smart Array P420 controllers. These controllers cannot be switched to HBA mode. :( And I just did this: ceph tell osd.* injectargs '--debug_ms 1/5'; osd.0: debug_ms=1/5 osd.1: debug_ms=1/5...
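
    A short sketch of raising, and later reverting, the OSD messenger debug level at runtime; the revert value 0/0 matches the setting quoted earlier in the thread:

        # raise messenger debug logging on all OSDs at runtime
        ceph tell osd.* injectargs '--debug_ms 1/5'

        # revert once enough logs have been collected
        ceph tell osd.* injectargs '--debug_ms 0/0'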
  12. G

    Regular errors on ceph pg's!

    Hi! I have now done what is described in this topic: https://forum.proxmox.com/threads/ceph-schedule-deep-scrubs-to-prevent-service-degradation.38499/ I set up a crontab schedule so that deep scrubs run during non-production hours only. Let's see what happens.
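
    The actual script from the linked thread is not quoted in the snippet; a minimal cron sketch of the same idea could look like this, where the file path and the 01:00 start time are assumptions:

        # /etc/cron.d/ceph-deep-scrub  (hypothetical file)
        # kick off a deep scrub of every OSD at 01:00, inside the non-production window
        0 1 * * *  root  for osd in $(ceph osd ls); do ceph osd deep-scrub "$osd"; done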
  13. G

    Regular errors on ceph pg's!

    Hi! 1. All servers use HP 1TB HDDs (on the cn1, cn2, cn3 nodes) and 2TB HDDs (on the cn4 node only). 2. iLO4 on all servers shows the status of all disks as OK. See the picture for an example: the 1TB disks are about 3 years old, the 2TB disks about 1 year old. The latest errors pointed to the OSDs on these (2TB) disks...
  14. G

    Regular errors on ceph pg's!

    I just checked the SMART information through all the RAID controllers. No errors were found. All drives appear to work fine. :(
  15. G

    Regular errors on ceph pg's!

    If I disable the cache on the RAID controller, will that destroy the data on the disks? Will the performance of the Ceph storage drop? I have never tried this. Added later: Oops! On all servers the RAID write cache option is already disabled! The question is withdrawn.
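
    For completeness, one way to verify the cache settings on a Smart Array P420 is HPE's ssacli tool (hpssacli on older installs); a sketch, where the slot number is an assumption:

        # show the full controller configuration, including cache and logical drive details
        ssacli ctrl all show config detail

        # show controller details (cache status, cache ratio) for the controller in slot 0
        ssacli ctrl slot=0 show detail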
  16. G

    Regular errors on ceph pg's!

    Yes, my servers are equipped with RAID controllers, but they cannot be switched to HBA mode, so I use a single-disk RAID-0 volume for each disk (OSD). And NO, each time it happens on different OSDs installed in different cluster nodes. Could this be related to the controller's 1GB RAID cache? But why, then, in the old version...
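
    A sketch of how such single-disk RAID-0 logical drives are typically created with ssacli on a Smart Array controller; the slot number and drive address 1I:1:1 are placeholders:

        # list the physical drives attached to the controller in slot 0
        ssacli ctrl slot=0 pd all show

        # create a single-drive RAID-0 logical drive from one physical disk
        ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0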
  17. G

    Regular errors on ceph pg's!

    So far, I periodically run this one-liner to repair the inconsistent PGs: ceph pg dump | grep -i incons | cut -f1 -d" " | while read i; do ceph pg repair ${i} ; done
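
    The same one-liner, expanded into a commented script with the logic unchanged: list all PGs, keep the inconsistent ones, take the PG ID (first field), and ask Ceph to repair each of them.

        #!/bin/bash
        # repair every PG currently reported as inconsistent
        ceph pg dump | grep -i incons | cut -f1 -d" " | while read -r pg; do
            ceph pg repair "${pg}"
        done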
  18. G

    Regular errors on ceph pg's!

    This is a new install (via the Debian install method), with the VMs and CTs restored from backups. All disks are SATA HDDs like this: I did not make snapshots. Gosha
  19. G

    Regular errors on ceph pg's!

    While I was writing this, it happened again! Gosha
  20. G

    Regular errors on ceph pg's!

    Hi! I very much regret that I upgraded my cluster from 4.x to 5.2 (via a new install and restoring all VMs and CTs from backup)! The new BlueStore storage is driving me crazy! I now do not sleep at night, trying to understand why the same disks worked fine in the old Ceph version, and in the new...