The recommendation for Ceph is to put the Ceph backend traffic on a separate network; corosync traffic should ideally be on a separate network as well.
LACP bonding over 4 ports would probably not have the expected effect: LACP distributes traffic per connection, so the bandwidth between two dedicated nodes would still not exceed 10G.
So I would...
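As a rough illustration of that separation (the subnets are placeholders, and the file lives at /etc/pve/ceph.conf on Proxmox VE or /etc/ceph/ceph.conf on plain Ceph):

[global]
    # Ceph client / MON traffic
    public_network = 10.10.10.0/24
    # Ceph OSD replication (backend) traffic
    cluster_network = 10.10.20.0/24

Corosync then gets its own dedicated NIC and subnet (e.g. 10.10.30.0/24) configured as its own ring in corosync.conf.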
Are these OSDs that you created before Nautilus with ceph-disk?
Did you follow this step in the upgrade to Nautilus:
ceph-volume simple scan /dev/...
and then activate it?
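For reference, the conversion of old ceph-disk OSDs during the Nautilus upgrade looks roughly like this (the device name is only a placeholder for your OSD data partition):

ceph-volume simple scan /dev/sdb1        # writes the OSD metadata to /etc/ceph/osd/
ceph-volume simple activate --all        # creates the systemd units so the OSDs start again

Without the scan/activate step, such OSDs typically will not come up again after a reboot.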
So you have built a full mesh?
Server 1 port 1 to Server 2 port 2
Server 2 port 1 to Server 3 port 2
Server 3 port 1 back to Server 1 port 2
or similar
If the bridge sent out on both ports, you would have a loop blowing up traffic completely. What the OVS bridge does (and some switches can do...
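A minimal sketch of such a full-mesh leg with an OVS bridge and RSTP loop protection (interface names, bridge name and address are placeholders, and the RSTP tuning a real setup needs is left out):

auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options other_config:rstp-enable=true

auto eno2
iface eno2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options other_config:rstp-enable=true

auto vmbr1
iface vmbr1 inet static
    address 10.10.20.1/24
    ovs_type OVSBridge
    ovs_ports eno1 eno2
    # RSTP blocks one leg of the triangle so a broadcast cannot circle forever
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true

In this sketch it is RSTP that keeps one redundant link blocked, which prevents the loop described above.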
It is intentional that Ceph does not fill up all available bandwidth during recovery/rebalancing. If you want to speed it up:
You can also set these values if you want a quick recovery for your cluster, helping OSDs to perform recovery faster.
osd max backfills: This is the maximum number of...
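For example, these can be raised at runtime (the numbers are only an illustration, not a recommendation for this particular cluster):

ceph config set osd osd_max_backfills 4          # more concurrent backfill operations per OSD
ceph config set osd osd_recovery_max_active 6    # more recovery ops running in parallel per OSD

Higher values shorten the recovery/rebalance, but they take bandwidth and IOPS away from client traffic, so it is a trade-off.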
But dissolve the RAID in the BIOS first! Otherwise things will break in unexpected places.
That is probably an Intel S-ATA controller with the LSI fake RAID code? I can only say: keep your hands off it, pass the disks through individually and then run ZFS on top of them.
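A minimal sketch of that, assuming the controller is switched to plain AHCI/HBA mode first (pool name and disk IDs are placeholders):

# create a mirrored pool directly on the two whole disks, no fake RAID in between
zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

This way ZFS sees the whole disks itself instead of whatever the fake RAID layer would present.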
You can usually put S-ATA SSDs in a SAS shelf as long as you do not need the second channel (i.e. you do not have a second head).
Of course you could also use SAS SSDs.
Again -> keep your fingers off SD cards in a production server; they are good for camera equipment but not for servers. We also see...
I would strongly advise against using SD cards for system disks.
Reason: SD cards do not like heavy writing; they will die very early. They are only usable for setups with rare writes. At the very least you need to put log files on another medium. So why not use an adequate S-ATA SSD? They do not cost...
1 -> Build the corosync cluster network
2 -> Install Ceph on all nodes!
3 -> Create MONs (I would recommend them on all 3 nodes)
4 -> Create OSDs on nodes 2 + 3 (you get degraded PGs as long as node 1 has no OSDs, but that's OK for the migration period)
5 -> Create a pool for images (a rough command sketch follows below this list)
6 -> Migrate VM...
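Assuming this is a Proxmox VE cluster, steps 2-5 roughly map onto the pveceph tooling; device and pool names below are placeholders and the exact subcommand names depend on the PVE version:

pveceph install                  # step 2, on every node
pveceph mon create               # step 3, run on each of the 3 nodes
pveceph osd create /dev/sdb      # step 4, on nodes 2 + 3 first
pveceph pool create vm-images    # step 5, pool that will hold the VM images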
This sounds like sdd and sdc were not added as a mirror vdev but as single devices - really a bad setup.
So there is no chance; the data is dead meat as it is spread over all vdevs.
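For illustration, this is the kind of command difference that causes it (pool and device names are placeholders):

zpool add tank sdc sdd           # adds two single-disk vdevs; data is striped across them, no redundancy
zpool add tank mirror sdc sdd    # what a redundant extension would have looked like: one mirror vdev

With the data striped over all vdevs, losing one of those single disks takes the whole pool, and everything on it, with it.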