root@inc1pve25:~# nmap -p 111,2049 172.19.2.183
Starting Nmap 7.70 ( https://nmap.org ) at 2020-07-11 07:18 UTC
Nmap scan report for inc1vfs3 (172.19.2.183)
Host is up (0.00010s latency).
PORT     STATE SERVICE
111/tcp  open  rpcbind
2049/tcp open  nfs
MAC Address...
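The open ports can also be cross-checked from the client side. A minimal sketch, assuming the standard rpcbind and nfs-utils client tools are installed:

rpcinfo -p 172.19.2.183     # list the RPC programs registered with rpcbind (port 111)
showmount -e 172.19.2.183   # list the exports advertised by the NFS server (port 2049)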
10 minutes later
2020-07-10 14:37:41.961802 mon.inc1pve25 [INF] Marking osd.44 out (has been down for 606 seconds)
2020-07-10 14:37:41.961822 mon.inc1pve25 [INF] Marking osd.45 out (has been down for 606 seconds)
2020-07-10 14:37:41.961831 mon.inc1pve25 [INF] Marking osd.46 out (has been down for...
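The ~600 seconds in these log lines match the default mon_osd_down_out_interval of 600 s, after which the monitors mark a down OSD out. A sketch for checking (and, if desired, raising) it on Nautilus or later; the 900 is only an illustrative value:

ceph config get mon mon_osd_down_out_interval        # default: 600 seconds
ceph config set mon mon_osd_down_out_interval 900    # illustrative value, not a recommendation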
Yes, it is the same.
1 node down (i.e. 4 OSDs down) ==> around 10 seconds of no writes
2 nodes down (i.e. 8 OSDs down) ==> 10 minutes of no writes, and we are unable to log in to the VMs as well (see the pool check below)
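The two-node case blocking writes is consistent with PGs dropping below the pool's min_size, at which point I/O stalls until enough replicas come back. A quick check, as a sketch; replace <poolname> with your actual pool name (e.g. the RBD pool backing the VMs):

ceph osd pool get <poolname> size        # replica count per object
ceph osd pool get <poolname> min_size    # I/O blocks once fewer replicas than this are up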
ceph status
  cluster:
    id:     b020e833-3252-416a-b904-40bb4c97af5e
    health: HEALTH_WARN
            8 osds down...
ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META    AVAIL   %USE  VAR   PGS  STATUS  TYPE NAME
-1         83.83191         -   84 TiB   65 GiB   17 GiB  742 KiB  48 GiB  84 TiB  0.08  1.00    -          root default
-3          6.98599         -  7.0 TiB  5.4 GiB  1.4 GiB   56 KiB...
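To see exactly which OSDs are down and what is behind the HEALTH_WARN, the standard queries help; shown here as a sketch, run on any monitor node:

ceph osd tree down    # show only the down OSDs and the hosts they live on
ceph health detail    # per-OSD detail behind the HEALTH_WARN summary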
The CRUSH map after applying it looks like this; I have taken a new dump after applying:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable...
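As an aside, the tunables section can also be inspected without decompiling the whole map; on a reasonably recent release:

ceph osd crush show-tunables    # print the active CRUSH tunables as JSON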
Yes, that's understandable. Now, if I just remove the choose_args section from the CRUSH map, will that be enough?
I will follow the procedure (sketched below) to apply it again.
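For reference, the usual edit-and-reinject cycle looks like this (file names are placeholders; note that setcrushmap can trigger data movement):

ceph osd getcrushmap -o crush.bin     # dump the compiled CRUSH map
crushtool -d crush.bin -o crush.txt   # decompile to editable text
# edit crush.txt and remove the choose_args section
crushtool -c crush.txt -o crush.new   # recompile
ceph osd setcrushmap -i crush.new     # inject the edited map into the cluster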