Search results

  1. ssaman

    Removing a node from CEPH and the cluster

    Hello everyone, we are currently taking our cluster apart piece by piece. It is a 5-node cluster, and we want to remove 2 of those nodes. So far we have no experience with removing nodes, which is why we wanted to be on the safe side and ask here first. We would proceed as follows... (see the node-removal sketch below the results)
  2. ssaman

    Bandwidth very low - 2,3 MB/sec

    We had some issues with some OSDs before. They randomly dropped to "off", so we removed them and added them back to the pool again.
  3. ssaman

    Bandwidth very low - 2,3 MB/sec

    We use a 10 Gbps switch; the exact model is a ProSAFE XS708E. Our disks are from 3 different manufacturers: node1 HUH721008AL5200, node2 HUS726060ALE610, node3 WD6002FFWX-68TZ4N0. Some of them are connected directly to the motherboard, the others are connected through an LSI MR 9250 4i (non-RAID). It...
  4. ssaman

    Bandwidth very low - 2,3 MB/sec

    Do you need anything else?
  5. ssaman

    Bandwidth very low - 2,3 MB/sec

    Which setting do you need?
  6. ssaman

    Bandwidth very low - 2,3 MB/sec

    Hello everyone, we have a big problem with our Ceph configuration. For the last 2 weeks the bandwidth has been extremely low. Does anybody have an idea how we can fix this? (see the benchmarking sketch below the results)
  7. ssaman

    slow requests are blocked - very slow VMs

    Currently, there are no slow requests anymore. Maybe the Ceph balancer fixed our problem.
  8. ssaman

    slow requests are blocked - very slow VMs

    Hi all, since today we have an issue with our Proxmox / Ceph. We have already activated the Ceph balancer (see the balancer sketch below the results). I hope someone can help us.
  9. ssaman

    [SOLVED] can't remove VM - storage does not exists

    Hello there, I am trying to remove a leftover testing VM. I get this message: We already removed this Ceph storage. We don't know how to securely remove this VM. Maybe it would be enough to remove /etc/pve/qemu-server/<VMID>.conf (see the sketch below the results). Thank you and best regards.
  10. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    Thank you, Tim, for the information. In case anyone wants to know how we fixed it: as already mentioned, we have an LSI MegaRAID controller MR9260-i4. This controller isn't able to set a disk to JBOD mode. The workaround is to create a RAID0 with a single disk (see the MegaCli sketch below the results). We noticed that...
  11. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    We fixed the problem by removing the disks and reusing the same disks again. We don't know why Ceph threw out the OSDs even though the disks are still good. Currently, we are struggling with an LSI MegaRAID controller. We put the same disk back into the same slot of the RAID controller (RAID0 / single...
  12. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    No, the cluster wasn't near full. Current usage: Maybe the problem is that we set our replica count to 3 and we have 4 OSDs per node (see the capacity-check sketch below the results). I have uploaded the log file. I hope it helps.
  13. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    There are some errors that I can't interpret or don't know how to fix, like this one from /var/log/ceph/ceph-osd.11.log: 0> 2019-05-09 13:03:43.860878 7fb79ab8e700 -1 /mnt/pve/store/tlamprecht/sources/ceph/ceph-12.2.12/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread...
  14. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    Hello Proxmox community, today we noticed a health error with the message: HEALTH_ERR 1 backfillfull osd(s); 1 nearfull osd(s); 1 pool(s) backfillfull; Degraded data redundancy: 99961/8029671 objects degraded (1.245%), 19 pgs degraded, 19 pgs undersized; Degraded data redundancy (low space)...
  15. ssaman

    [SOLVED] Ceph Health Warning

    Yes, ceph health is getting better. Since yesterday: # ceph health HEALTH_WARN 1987253/8010258 objects misplaced (24.809%); Degraded data redundancy: 970715/8010258 objects degraded (12.118%), 187 pgs degraded, 187 pgs undersized. Less misplaced and degraded data. Edit: Do you mean we just have to...
  16. ssaman

    [SOLVED] Ceph Health Warning

    The old pool had "min_size 1" because it was a temporary pool used to bring in VMs from an old cluster. Yes, we added node2 and node3 to the cluster and also added the disks (OSDs) on those nodes to the new pool.
  17. ssaman

    [SOLVED] Ceph Health Warning

    Added OSDs on node1; created a new pool (the old pool); created VMs on node1; added node2; created a new pool with the current settings; moved the disks from the old pool to the current pool (see the pool-migration sketch below the results); removed the unused disks (old pool) over the GUI, except for 2 VMs; destroyed the old pool; removed the last unused disks (from the 2 VMs before) from the old...
  18. ssaman

    [SOLVED] Ceph Health Warning

    We moved the disks via the Proxmox GUI, and then removed/destroyed the old disks/pool over the GUI.
  19. ssaman

    [SOLVED] Ceph Health Warning

    We use replicated_hdd for the pool
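
Command sketches

For result 1 (removing a node from CEPH and the cluster), a minimal sketch of the usual order of operations on current Proxmox VE, assuming the OSD IDs and the node name below are stand-ins for the real ones:

    # repeat for every OSD hosted on the node that is being retired
    ceph osd out 12
    # wait for rebalancing to finish and the cluster to report HEALTH_OK, then:
    systemctl stop ceph-osd@12
    pveceph osd destroy 12
    # remove the node's monitor (and manager) if it runs one
    pveceph mon destroy node5
    # finally, with the retired node powered off, remove it from the Proxmox
    # cluster from one of the remaining nodes
    pvecm delnode node5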
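
For the "Bandwidth very low" thread (results 2 to 6), a sketch for separating a network problem from a disk/OSD problem; the peer address and the pool name are placeholders:

    # raw network throughput between two nodes (start the server side first)
    iperf3 -s                    # on the first node
    iperf3 -c 192.0.2.10         # on the second node; address is a placeholder
    # Ceph-level throughput against a test pool
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados -p testpool cleanup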
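
For the "slow requests are blocked" thread (results 7 and 8), the balancer the posts refer to is switched on roughly like this; upmap mode assumes all clients are Luminous or newer:

    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status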
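
For result 9 (a VM whose Ceph storage no longer exists), a sketch of a cleaner route than deleting the config file straight away; the VMID 123 is a placeholder:

    qm config 123        # see which entries still reference the removed storage
    # either re-create a storage with the same ID for a moment (Datacenter -> Storage),
    # or edit /etc/pve/qemu-server/123.conf and remove the lines that point at it, then:
    qm destroy 123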
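
For results 10 and 11, the "RAID0 with a single disk" workaround on the MR9260 controller is usually done with MegaCli; a sketch, where the enclosure:slot pair and adapter index are illustrative and not taken from the thread:

    MegaCli -PDList -aALL               # find the Enclosure Device ID and Slot Number
    MegaCli -CfgLdAdd -r0 [252:3] -a0   # single-disk RAID0 on that drive
    MegaCli -LDInfo -Lall -a0           # verify the new logical drive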
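
For the backfillfull thread (results 12 to 14), a sketch of how the utilization behind HEALTH_ERR backfillfull is usually inspected, plus the temporary ratio bump that can unblock backfill; the pool name and the 0.92 value are only examples:

    ceph health detail
    ceph osd df tree                 # per-OSD usage; backfillfull triggers at 90% by default
    ceph osd pool get vmpool size    # replica count (3 in the thread)
    # temporarily raise the threshold while data is rebalanced or capacity is added
    ceph osd set-backfillfull-ratio 0.92
    ceph osd reweight-by-utilization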
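
For the Ceph Health Warning thread (results 15 to 19), a sketch covering the pieces mentioned in the posts: raising min_size back to 2, the CLI counterpart of the GUI disk move, destroying the old pool, and a device-class rule of the replicated_hdd kind. Pool names, the VMID, and the disk key are placeholders:

    ceph osd pool set newpool min_size 2
    qm move_disk 123 scsi0 newpool --delete 1     # CLI counterpart of "Move disk" in the GUI
    pveceph pool destroy oldpool                  # older releases: pveceph destroypool oldpool
    # device-class rule of the kind the pool uses
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd pool set newpool crush_rule replicated_hdd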
