Hello everyone,
we are currently dismantling our cluster step by step.
It is a 5-node cluster, and we want to remove 2 of the nodes.
So far we have no experience with removing nodes, so we wanted to be on the safe side and ask here first.
We would proceed as follows...
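Our rough idea (the OSD IDs and the node name are just placeholders) - please correct us if the order or the commands are wrong:
# ceph osd out <osd-id>            (stop placing new data on the OSD)
(wait until "ceph -s" is healthy again and rebalancing has finished)
# systemctl stop ceph-osd@<osd-id> (on the node that owns the OSD)
# pveceph osd destroy <osd-id>     ("pveceph destroyosd <osd-id>" on older versions)
Repeat this for every OSD on the node, destroy its monitor/manager if it runs one, shut the node down, and finally on one of the remaining nodes:
# pvecm delnode <nodename>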
We use a 10Gbps switch - the exact model is a Prosafe XS708E.
Our disks are from 3 different manufacturers:
node1
HUH721008AL5200
node2
HUS726060ALE610
node3
wd6002ffwx-68tz4n0
Some of them are connected directly to the motherboard; the others are connected through an LSI MR 9250 4i (non-RAID).
It...
Hello everyone,
we have a big problem with our Ceph configuration.
About two weeks ago the bandwidth dropped to an extremely low level.
Does anybody have an idea how we can fix this?
Hello there,
I am trying to remove a leftover testing VM.
I get this message:
We already removed this Ceph storage.
We don't know how to safely remove this VM.
Maybe it would be enough to remove
/etc/pve/qemu-server/<VMID>.conf
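Or would it be cleaner to let Proxmox remove it? Something like this, assuming the VM ID is 105 (just a placeholder) - we are not sure whether it still works when the underlying storage is already gone:
# qm destroy 105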
Thank you and best regards.
Thank you, Tim, for the information.
In case anyone wants to know how we fixed it for now:
As already mentioned, we have an LSI MegaRAID controller MR9260-i4. This controller isn't able to put a disk into JBOD mode.
The workaround is to create a RAID0 volume with a single disk.
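For reference, a single-disk RAID0 can be created from the CLI roughly like this; controller 0 and enclosure:slot 252:0 are only placeholders, and the exact syntax may differ between tool versions:
# storcli64 /c0 add vd type=raid0 drives=252:0
or with the older tool:
# MegaCli64 -CfgLdAdd -r0 [252:0] -a0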
We noticed that...
We fixed the problem by removing the disks and reusing the same disks again. We don't know why Ceph threw out the OSDs even though the disks are still good.
Currently, we are struggling with an LSI MegaRAID controller.
We put the same disk back into the same slot of the RAID controller (RAID0 / single...
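For anyone who runs into the same thing, the quickest check for whether the OSD comes back and the data gets backfilled again is:
# ceph osd tree
# ceph -s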
No, the cluster wasn't near full.
Current Usage:
Maybe the problem is that we set our replica count to 3.
And we have 4 OSDs per node.
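For completeness, pool usage and the per-OSD distribution can be checked with:
# ceph df
# ceph osd df tree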
I have uploaded the log file. I hope it helps.
There are some errors that I can't interpret and don't know how to fix, like:
/var/log/ceph/ceph-osd.11.log
0> 2019-05-09 13:03:43.860878 7fb79ab8e700 -1 /mnt/pve/store/tlamprecht/sources/ceph/ceph-12.2.12/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_kv_sync_thread()' thread...
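In case it helps, the full assert and the events right before it can be pulled out of that log with:
# grep -B 30 -A 10 '_kv_sync_thread' /var/log/ceph/ceph-osd.11.log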
Yes, ceph health is getting better.
Since yesterday:
# ceph health
HEALTH_WARN 1987253/8010258 objects misplaced (24.809%); Degraded data redundancy: 970715/8010258 objects degraded (12.118%), 187 pgs degraded, 187 pgs undersized
There is less misplaced and degraded data than before.
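To keep an eye on the recovery without rerunning the command by hand, something like this is enough:
# watch -n 60 ceph -s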
Edit:
Do you mean we just have to...
The old pool had "min_size 1" because it was only a temporary pool for bringing over the VMs from an old cluster.
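For reference, the replication settings of a pool can be checked and, if needed, changed like this ("vm-pool" is just a placeholder name):
# ceph osd pool get vm-pool size
# ceph osd pool get vm-pool min_size
# ceph osd pool set vm-pool min_size 2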
Yes, we added node2 and node3 to the cluster and also added the disks (OSDs) on those nodes to the new pool.
added OSDs on node1
created new pool (old pool)
created VMs on node1
added node2
created new pool with current settings
moved disks from old pool to current pool
removed unused disks (old pool) via the GUI, except for 2 VMs
destroyed old pool (CLI sketch below)
removed the last unused disks (from the 2 VMs mentioned before) from the old...
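For the "moved disks from old pool" and "destroyed old pool" steps above, the CLI equivalent would look roughly like this (VM ID 101, disk scsi0 and the storage/pool names are only placeholders):
# qm move_disk 101 scsi0 new-ceph-storage --delete 1
# ceph osd pool delete old-pool old-pool --yes-i-really-really-mean-it
(the pool delete only works if mon_allow_pool_delete is set to true)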