I have around 18 cluster servers that all share the same firewall rules. Everything is fine except one node: as soon as the firewall is enabled, all connections drop, and the logs show:
Oct 4 05:31:29 xx kernel: [250319.678513] nf_conntrack: nf_conntrack: table full, dropping packet
Oct 4 05:31:29 xx kernel: [250319.678799]...
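That message means the connection-tracking table has hit its limit. A minimal sketch of checking and raising it via sysctl (the limit value below is an example, not a recommendation; size it to the node's traffic and RAM):

# compare current usage against the limit
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
# raise the limit persistently (262144 is an example value)
echo 'net.netfilter.nf_conntrack_max = 262144' >> /etc/sysctl.d/99-conntrack.conf
sysctl -p /etc/sysctl.d/99-conntrack.conf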
Hello,
It seems there is a bug with the Cloud-init root password. I understand that Cloud-init works with SSH key access, but normal root password access should work as well. We set up a template and enabled root access, but it never works: it requires logging in by SSH key first and then setting the root password manually with "passwd".
The...
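For comparison, a minimal cloud-init user-data sketch that enables root password login (the password value is a placeholder; disable_root, ssh_pwauth, and chpasswd are standard cloud-init directives, not something confirmed from this setup):

#cloud-config
disable_root: false
ssh_pwauth: true
chpasswd:
  # placeholder password, replace before use
  list: |
    root:MySecretPassword
  expire: false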
After a few hours of searching I figured something out, though I am not sure it really works. Here is how it looks now:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable...
I followed https://ceph.com/community/new-luminous-crush-device-classes/
I added the rules and everything seems fine, but I am not sure why Ceph started replicating the HDD data to the NVMe as well.
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries...
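For what it's worth, the approach in that article is to create one replicated rule per device class and point each pool at its rule; a pool left on the default rule will still place data across all classes, which would explain HDD data landing on the NVMe. A sketch, with hdd-pool and nvme-pool as hypothetical pool names:

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_nvme default host nvme
ceph osd pool set hdd-pool crush_rule replicated_hdd
ceph osd pool set nvme-pool crush_rule replicated_nvme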
Hello,
I have one KVM with a 5 TB Ceph raw disk and Cloud-init enabled. The issue is that the system is on a single partition, vda1, and I can only see 2 TB of it:
root@xxx:~# df -H
Filesystem Size Used Avail Use% Mounted on
udev 511M 0 511M 0% /dev
tmpfs 105M 5.1M 100M 5% /run...
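A hard 2 TB ceiling usually points at an MBR (msdos) partition table, which cannot address beyond 2 TB. One way to check (device name taken from the post above):

parted /dev/vda print
# "Partition Table: msdos" would confirm the MBR limit; using the full
# 5 TB would then require converting the disk to GPT, e.g. with gdisk.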
Hello,
If I use an NVMe for the Proxmox OS in a Ceph node without any RAID and this drive fails, will I lose everything on that node, even the OSDs and journal? Or if I replace it, reinstall a fresh OS, and join the cluster again, will all the OSDs still be there and available?
Side note : the NVME for...
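Generally, OSDs that live on their own disks keep their data and metadata on those disks, so after a fresh OS install and rejoining the cluster they can usually be brought back. A hedged sketch for Bluestore OSDs that were created with ceph-volume (assumes /etc/ceph/ceph.conf and the keyrings are restored first):

# rediscover and start all OSDs found on the local disks
ceph-volume lvm activate --all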
It seems quite strange that I get better performance with only 2 x 6 TB SATA drives when using Filestore instead of Bluestore?!
rados bench -p test 60 write --no-cleanup
Total time run: 62.803659
Total writes made: 1696
Write size: 4194304
Object size: 4194304...
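One thing worth checking: Bluestore on plain HDDs with the DB/WAL colocated on the same spinner can lose to Filestore with a journal. A sketch of creating a Bluestore OSD with its DB on faster flash (device paths are placeholders, not from this thread):

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1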
I'm using a Mellanox SX6025 non-blocking unmanaged 56Gb/s SDN switch, and I'm not sure whether it will even work if I increase the MTU. I'm currently using mtu 65520. Is there any way to increase it, and if so, to how much?
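Assuming those ports run IPoIB, 65520 is already the ceiling for connected mode (datagram mode tops out around 4092), so there is nothing higher to set. A minimal ifupdown sketch, with ib0 as an assumed interface name:

auto ib0
iface ib0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    # connected mode is what allows the 65520 MTU
    pre-up echo connected > /sys/class/net/ib0/mode
    mtu 65520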
Hello,
I'm running Ceph and not sure whether this is the best speed for my configuration. I am using 3 OSDs with 5 TB enterprise hard drives and an NVMe P3700 as the Bluestore journal/DB disk. My concern is that I need a lot of space along with a lot of speed, so if I add more of the 5 TB drives, will it speed up? Or should I add more journal...
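As a rough back-of-envelope (assuming ~150 MB/s sustained per spinner, which is an assumption, not a number measured here), 3x replication divides the aggregate:

9 OSDs x ~150 MB/s per drive / 3 replicas ≈ 450 MB/s peak client write throughput

So adding more 5 TB OSDs should raise the ceiling roughly linearly until the P3700 DB/WAL or the network saturates.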
I did some tests:
Dual E5-2660
75 GB RAM
SM863 for the host OS
Dual port Mellanox 56Gb/s
3 x 5 TB hard drive OSDs per server, 9 OSDs total
1 x P3700 journal per node, 3 total
osd commit_latency(ms) apply_latency(ms)
8 65 65
7 74 74
6...
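For anyone wanting to reproduce that view, latency output in this format comes from the OSD perf counters:

ceph osd perf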
I was able to set up the Mellanox dual-port 54Gb/s FDR card, but only managed to do so without bonding:
root@c18:~# rados -p test bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix...
I would use a different subnet on eth4 or eth5, or even on both when using bonding; otherwise the network does not come fully up and the bonding IPs do not ping between nodes. For now I use one NIC port without bonding, and here is the test:
root@ceph4:~# rados bench -p test 60 seq
hints = 1
sec Cur ops started...
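In case it helps, a minimal ifupdown bonding sketch for the two ports (eth4/eth5 are the names from the post; the address is a placeholder, and active-backup is an assumption here, chosen because it needs no switch-side LACP support):

auto bond0
iface bond0 inet static
    address 10.10.10.14
    netmask 255.255.255.0
    bond-slaves eth4 eth5
    bond-mode active-backup
    bond-miimon 100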