Recent content by ssaman

  1. ssaman

    Removing a node from CEPH and the cluster

    Hello everyone, we are currently dismantling our cluster piece by piece. It is a 5-node cluster, and we want to remove 2 of those nodes. So far we have no experience with removing nodes, so we wanted to be on the safe side and ask here first. We would proceed as follows...
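    A rough sketch of the usual order of operations, assuming the node's OSDs have IDs like osd.10 and the node name is a placeholder; all VMs are assumed to be migrated away already, and this is not confirmed as the thread's final procedure:

        # take the node's OSDs out and wait for Ceph to rebalance back to HEALTH_OK
        ceph osd out 10
        # then stop and purge each OSD on that node
        systemctl stop ceph-osd@10
        ceph osd purge 10 --yes-i-really-mean-it
        # once no Ceph daemons (OSD/MON/MGR) remain on the node, remove it from the Proxmox cluster
        pvecm delnode <nodename>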
  2. ssaman

    Bandwidth very low - 2,3 MB/sec

    We had some issues with some OSDs before. They randomly dropped to "off", so we removed them and added them back to the pool again.
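    A small diagnostic sketch (an assumption, not taken from the thread) for checking which OSDs are flapping and why before removing them, with osd.3 as a placeholder ID:

        # show the up/down and in/out state of every OSD
        ceph osd tree
        # overall cluster state and recent health events
        ceph -s
        # look for the reason an OSD was marked down in its own log
        less /var/log/ceph/ceph-osd.3.log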
  3. ssaman

    Bandwidth very low - 2,3 MB/sec

    We use a 10 Gbps switch; the exact model is a Prosafe XS708E. Our disks are from 3 different manufacturers: node1 HUH721008AL5200, node2 HUS726060ALE610, node3 WD6002FFWX-68TZ4N0. Some of them are connected directly to the motherboard, the others are connected through an LSI MR 9250 4i (non-RAID). It...
  4. ssaman

    Bandwidth very low - 2,3 MB/sec

    Do you need anything else?
  5. ssaman

    Bandwidth very low - 2,3 MB/sec

    Which setting do you need?
  6. ssaman

    Bandwidth very low - 2,3 MB/sec

    Hello everyone, we have a big problem with our Ceph configuration. For the last two weeks the bandwidth has dropped extremely low. Does anybody have an idea how we can fix this?
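    A minimal sketch of how the raw Ceph bandwidth could be measured to narrow the problem down, assuming a pool name of <poolname> (placeholder); this is not taken from the thread itself:

        # 10-second write benchmark against the pool, keeping the objects for the read test
        rados bench -p <poolname> 10 write --no-cleanup
        # sequential read benchmark, then remove the benchmark objects
        rados bench -p <poolname> 10 seq
        rados -p <poolname> cleanup
        # per-OSD commit/apply latency, to spot a single slow disk
        ceph osd perf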
  7. ssaman

    slow requests are blocked - very slow VMs

    Currently, there are no slow requests anymore. Maybe the Ceph balancer fixed our problem.
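    For reference, a hedged sketch of how the balancer's state could be checked (an assumption, not quoted from the thread):

        # show whether the balancer is active and which mode it is using
        ceph balancer status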
  8. ssaman

    slow requests are blocked - very slow VMs

    Hi all, since today we have an issue with our Proxmox / Ceph. We already activated the Ceph balancer. I hope someone can help us.
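    A minimal sketch of what "activating the Ceph balancer" typically involves on Luminous (assumed from context, not quoted from the post):

        # enable the balancer manager module, pick a mode, and let it run automatically
        ceph mgr module enable balancer
        ceph balancer mode crush-compat
        ceph balancer on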
  9. ssaman

    [SOLVED] can't remove VM - storage does not exists

    Hello there, I am trying to remove a leftover testing VM. I get this message: We already removed this Ceph storage. We don't know how to securely remove this VM. Maybe it would be enough to remove /etc/pve/qemu-server/<VMID>.conf. Thank you and best regards.
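    One possible cleanup path (a sketch only, with <VMID> as a placeholder and assuming the VM's disks only lived on the already-removed Ceph storage; the thread's actual solution may differ):

        # normal removal, which fails here because the referenced storage no longer exists
        qm destroy <VMID>
        # fallback: drop the leftover VM definition itself, since its disks are already gone
        rm /etc/pve/qemu-server/<VMID>.conf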
  10. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    Thank you, Tim, for the information. In case anyone wants to know how we fixed it: as already mentioned, we have an LSI MegaRAID controller MR9260-i4. This controller isn't able to set a disk to JBOD mode. The workaround is to create a RAID0 with a single disk. We noticed that...
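    A hedged sketch of the single-disk RAID0 workaround with MegaCli, assuming the drive sits at enclosure 252, slot 4 on adapter 0 (all placeholders); the exact syntax for this controller is an assumption, not quoted from the thread:

        # list physical drives to find the enclosure:slot of the new disk
        MegaCli64 -PDList -aAll
        # create a single-drive RAID0 virtual disk on adapter 0
        MegaCli64 -CfgLdAdd -r0 [252:4] -a0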
  11. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    We fixed the problem by removing the disks and reusing the same disks again. We don't know why Ceph threw out the OSDs even though the disks are still good. Currently, we are struggling with an LSI MegaRAID controller. We put the same disk back into the same slot of the RAID controller (RAID0 / single...
  12. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    No, the cluster wasn't near full. Current usage: Maybe the problem is that we set our replica count to 3 and we have 4 OSDs per node. I have uploaded the log file. I hope it helps.
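    A short sketch (assumed, not from the thread) of the commands usually used to show this usage picture, with <poolname> as a placeholder:

        # raw and per-pool usage
        ceph df
        # per-OSD utilisation, grouped by host, to spot unbalanced OSDs
        ceph osd df tree
        # confirm the pool's replica count
        ceph osd pool get <poolname> size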
  13. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    There are some errors that I can't interpret and don't know how to fix, like this one from /var/log/ceph/ceph-osd.11.log: 0> 2019-05-09 13:03:43.860878 7fb79ab8e700 -1 /mnt/pve/store/tlamprecht/sources/ceph/ceph-12.2.12/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread...
  14. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    Hello Proxmox community, today we noticed a health error with the message: HEALTH_ERR 1 backfillfull osd(s); 1 nearfull osd(s); 1 pool(s) backfillfull; Degraded data redundancy: 99961/8029671 objects degraded (1.245%), 19 pgs degraded, 19 pgs undersized; Degraded data redundancy (low space)...
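    A hedged sketch of the usual first steps for a backfillfull/nearfull situation (assumptions, not the thread's confirmed fix; the ratio values are examples only):

        # see exactly which OSDs are over the thresholds
        ceph health detail
        ceph osd df tree
        # temporarily raise the thresholds so backfill can finish (defaults are 0.90 / 0.85)
        ceph osd set-backfillfull-ratio 0.92
        ceph osd set-nearfull-ratio 0.88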