hi,
i wouldn't call myself a guru, but what i've figured out is that 10G is a must in a high-IO setup, which is what you've got.
so what issues did you run into before giving up?
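just to put a number on that, a quick node-to-node bandwidth check is worth doing first; this is only a sketch, and iperf3 plus the target address are my own assumptions:

# on the first node, start an iperf3 server
iperf3 -s
# on a second node, measure throughput towards the first one (address is a placeholder)
iperf3 -c 10.255.247.13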
managed to change it, but it wasn't easy.
# export the current monitor map to a file
ceph mon getmap -o tmpfile
# inspect the map
monmaptool --print tmpfile
# remove the existing monitor entries
monmaptool --rm 0 --rm 1 --rm 2 tmpfile
# re-add the monitors with their new public-network addresses
monmaptool --add 1 10.255.247.13 --add 0 10.255.247.15 --add 2 10.255.247.16 tmpfile
then stop the monitors
and inject the modified monmap back into ceph...
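a minimal sketch of that last step, assuming systemd-managed monitors and a monitor id matching the hostname (id "node1" is a placeholder, repeat for each monitor):

# stop the monitor, inject the edited map, start it again
systemctl stop ceph-mon@node1
ceph-mon -i node1 --inject-monmap tmpfile
systemctl start ceph-mon@node1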
hi,
i've installed ceph with 3 nodes. after doing some tests i figured i'd like to get the performance increase i'd gain by adding a third network device.
How can i change the public network of ceph?
changing only the public network in ceph.conf isn't enough, is it?
kind regards
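for reference, the ceph.conf part of the change is just the network setting; the subnets below are placeholders for your own networks, and on its own this only affects newly started daemons - the monitor addresses also live in the monmap, which needs its own update (see the monmaptool steps earlier in this thread):

[global]
    public network = 10.255.247.0/24
    cluster network = 10.255.248.0/24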
hi,
thanks a lot for your explanation. so i was right - losing the journal will end up in total corruption of the node (i had hoped i was wrong ;) )
since my wiring is the bottleneck for my little cluster anyway, i'll leave the separate journal out of the setup and run without one. there should not...
got a three-node cluster, each node with 4x 3TB HDDs, 2x 128GB SSDs, 128GB RAM and no RAID controller.
Proxmox is set up on top of a Debian software-RAID setup.
The reason i do it this way is quite simple: software RAID is, in my opinion, the cheapest way to get data protection.
This setup should provide me the...
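for what it's worth, the software-RAID layer underneath is just a plain mdadm mirror; the device names below are placeholders, not my actual disks:

# mirror the two SSDs for the system (partitions are usually used rather than whole disks)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb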
for HA purposes i'd say use a hardware RAID1 for the SSDs and (if possible) add 2 additional SATA hard drives to each server.
use 2 of the hard drives for ceph and the RAID1 SSD array as the journal for those drives. create a pool with a replica of three and you will get 18TB of usable storage, that's...
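the pool creation itself is a one-liner; the pool name and PG count below are placeholders, not a recommendation:

# create a replicated pool and set it to 3 replicas (2 required to stay writable)
ceph osd pool create vm-storage 128 128 replicated
ceph osd pool set vm-storage size 3
ceph osd pool set vm-storage min_size 2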
virtio delivers the best IO performance you can expect out of your hardware. Virtio uses multiple IO threads, not just a single one like IDE.
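as an example, assuming VM id 100 and a storage named ceph-pool (both placeholders), adding a disk on the virtio bus from the CLI looks roughly like this:

# allocate a 32G disk for VM 100 on storage "ceph-pool" and attach it as virtio0
qm set 100 --virtio0 ceph-pool:32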
Hi,
i was wondering if it is possible to add an SSD software-RAID device as a journal for ceph?
Adding it manually with pveceph works, but then the OSD does not show up in the OSD list in the web interface and the device is marked as down.
i know it is not recommended, but in the other case just a...
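for context, by "manually" i mean something along these lines (device names are placeholders, the md device is exactly the unsupported part, and the flag spelling may differ between pveceph versions):

# create an OSD on sdd with its journal on the software-RAID device
pveceph createosd /dev/sdd -journal_dev /dev/md1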
ok - got it. i'll change the setup to:
2x 500GB ZFS RAID1 for the system
and the other hard drives dedicated to ceph.
But what about adding another node after creating the ceph cluster? will the data be replicated to the newly joining node?
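to check whether that actually happens: ceph rebalances on its own according to the CRUSH map once the new node's OSDs are in, and the standard status commands should show it:

# see the new host and its OSDs in the CRUSH tree
ceph osd tree
# watch recovery/backfill progress
ceph -s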