You should switch it then as I mentioned above, it's simple...
Change ceph.conf and do
systemctl stop ceph\*.service ceph\*.target
systemctl start ceph.target
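For reference, the two networks are set in the [global] section of ceph.conf. A minimal sketch, assuming hypothetical subnets (substitute your own ranges):

```ini
[global]
    # hypothetical example subnets -- use your actual ranges here
    public_network  = 192.168.10.0/24   # mon/mgr/client traffic (1gb)
    cluster_network = 10.0.7.0/24       # OSD replication traffic (10gb)
```

The OSDs pick up the new networks after the restart above.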
TP-LINK T1700G-28TQ is what I have, eyeing the cheap MikroTiks with 10gbit SFP+ ports as the next buy to tie the hobby room with my house.
- Mon, mgr and the Proxmox HTTPS GUI should all reside on network .10, i.e. the PUBLIC network on 1gb.
- CLUSTER is your SAN where OSDs speak to each other, as well as ring1 in corosync for HA migrations, on .7, i.e. the CLUSTER network on 10gb.
But getting a switch WILL make life much easier for you...
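As a sketch, a node entry in /etc/pve/corosync.conf with both rings could look like the below (node name and addresses are hypothetical, matching the .10/.7 split above):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.10.1   # ring0 on the 1gb public network
    ring1_addr: 10.0.7.1       # ring1 on the 10gb cluster network
  }
}
```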
https://github.com/fulgerul/ceph_proxmox_scripts/blob/master/new_node_install.sh
TL;DR:
sed -i 's/.*AcceptEnv LANG LC_\*.*/AcceptEnv LANG LC_PVE_* # Fix for perl: warning: Setting locale failed./' /etc/ssh/sshd_config
service ssh reload
# exit; Reconnect
You need a minimum of 3 monitors and 3 managers for everything to work transparently.
If you have 2 of each like you have now, there will be no majority quorum when it's time to vote for a new mon/mgr leader.
They all need to be on the same network, this can be achieved with VPN if the third node is...
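The quorum arithmetic behind this is simple enough to sketch (function names are mine, just for illustration):

```python
def majority(n_monitors: int) -> int:
    """Smallest number of monitors that can form a quorum."""
    return n_monitors // 2 + 1

def survives_one_failure(n_monitors: int) -> bool:
    """True if the cluster still has quorum after losing one monitor."""
    return n_monitors - 1 >= majority(n_monitors)

print(majority(2), survives_one_failure(2))  # 2 False: lose one of two, no quorum
print(majority(3), survives_one_failure(3))  # 2 True: three mons tolerate one failure
```

So with 2 monitors the majority is still 2, meaning a single failure stalls the cluster; with 3 the remaining pair can keep voting.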
Hi,
The corosync network is on the 10gb NIC with 172.16 as IP. The public network is the 1gb one on 192.168 net.
I had no idea that it was both normal and expected for a node to magically reboot itself. Because the way I see it, the network worked just fine; it was just corosync that messed up...
I hate to be that guy, but I told you so :D
You never mentioned the NUC so I am guessing this is the third node, yes? If so, a monitor will be needed on this one, which basically means that when one of the nodes goes down, the other 2 monitors can vote for a new monitor to be "leader"...
I understand. Since I am still a bit cheap until my proxmox/ceph NAS is done and ready, I don't have a subscription (yet), so I won't be able to answer your question..
Never done an offline install of .debs, but it sounds like you have a good grasp of that. Check the below.
You need to swap out the Enterprise repo to the free one like so:
# Swap to free distroupgrade
echo -e "deb http://ftp.se.debian.org/debian stretch main contrib\n\n# security updates\ndeb...
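For the Proxmox repos specifically, a sketch of the swap (assuming a stretch-based PVE 5 and the default list file paths):

```
# /etc/apt/sources.list.d/pve-enterprise.list -- comment this line out:
# deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list -- add this instead:
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
```

Then run apt update to pull the package lists from the free repo.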
I really look forward to this!!
"- A lot of x86_64 KVM work including STIBP support, Processor Tracing virtualization, new Intel Icelake CPU instruction set extensions support, and other work."
"Intel VT-d Scalable Mode"
That is strange, as the traffic to and from that node has worked according to the RRD graphs. And the other nodes on the same switch worked just fine. So where do I go from here? Should I replace my NIC?
It is correct and it will work, but no guarantees when it comes to the Ceph replica x2 tho! You have been warned!
Don't forget to put in small OS SSDs for the 2 main hosts as well!
Yes, this gives you about 50% of usable disk space.
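The math behind that 50% figure, as a quick sketch (function name is mine, for illustration):

```python
def usable_tb(raw_tb: float, replica_size: int) -> float:
    """N-way replication stores every object N times, so usable = raw / N."""
    return raw_tb / replica_size

print(usable_tb(8.0, 2))  # 4.0 -> replica x2 leaves 50% of raw capacity
print(usable_tb(9.0, 3))  # 3.0 -> replica x3 leaves ~33%
```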
Hi,
I have a node that all of a sudden began acting up. Memory pressure is high (90%+) on all 3 nodes since the newest Luminous, but the 2 other identical nodes don't have this issue and everything is working perfectly there, which I attribute to Ceph reserving memory for future use(?).
So I...
It is possible, yes; not recommended at all due to the above..... but possible. Just remember to stick an SSD in that third node tho, else the swappiness monster will eat them spinning rust :D