Hello,
I am encountering this same issue. Here is my architecture:
3 nodes, Ceph on all 3, but only 2 nodes hosting 2 OSDs each. I have the exact symptoms described above, and I guess your fix should work for me too, but first I'd like to be sure it is correct ...
I'm not sure I follow you; why can't the NUC see the cluster? As far as I can tell, the NUC is fully integrated into the cluster now. Am I missing something?
So in my case, I have this in the corosync config:
totem {
  cluster_name: HomeLab
  config_version: 3
  interface {
    bindnetaddr: 192.168.7.20...
You mentioned the totem magic; is that related to this KB? https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network
Are you talking about "Separate After Cluster Creation", with this example?
totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on...
Super! Many, many thanks again for all your answers. This was really helpful.
Now I will look into how to present network interfaces in a different subnet without binding an IP on each.
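For the record, here is a minimal sketch of what I have in mind, assuming Debian-style ifupdown config as Proxmox uses; the interface and bridge names (eno2, vmbr1) are placeholders, not my actual setup:

```
# /etc/network/interfaces (fragment) -- hypothetical names
auto eno2
iface eno2 inet manual
    # "manual" brings the interface up without binding an IP to it

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
    # guests attached to vmbr1 carry their own addresses in the other subnet
```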
Cheers!
Thanks a lot for your time and patience, AlexLup.
I have started the Ceph configuration over. I ran "pveceph purge" and re-created the Ceph cluster, initializing it on the PUBLIC network this time, so I now have all three nodes. See the new configuration:
[global]
auth client...
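For anyone landing here later, the sequence was roughly this; a sketch from memory, and note that subcommand names vary a bit between PVE versions (e.g. `pveceph createmon` vs `pveceph mon create`):

```
# On each node: tear down the old Ceph setup (this destroys Ceph data!)
pveceph purge

# On the first node: re-initialize Ceph, bound to the public network
pveceph init --network 192.168.7.0/24

# On every node that should run a monitor
pveceph mon create
```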
This is nice.
I tried to modify the Ceph config, but now everything is broken, so I have to figure this out. No more Ceph storage for the moment :) hopefully I only had a test machine on it.
I have built the corosync cluster on the .7 network; it's still a LAN / private network. The .10 network is used exclusively by the Ceph setup.
I also think that having an additional switch will help. I chose a GS110MX, which has 2x 10GbE and 8x 1GbE ports. Which model did you choose?
Yeah, but the thing is that the first two nodes are connected together with a direct cable, so it's impossible to put the NUC in between... except with a switch :( So I definitely need a switch. I should have one tomorrow.
Many thanks!
I fully understand.
Just to be very clear:
- All three nodes have their cluster IP in network 192.168.7.0/24. This is where the Proxmox cluster setup was done.
- Two of the three nodes have the Ceph mon / initialization in network 192.168.10.0/24. This is a non-switched network, as there is a crossover cable...
Hi AlexLup,
Of course, I mentioned it 3 messages above :P No worries... In fact my cluster does have 3 nodes now; the only thing is that when one of the two nodes with Ceph deployed goes down, the other node loses access to the storage pool, and it can see it and restart the VM only after the other...
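If I understand the monitor quorum rules correctly (an assumption on my part), this is just majority arithmetic: with only two mons, losing either one drops below a strict majority, so Ceph blocks until it comes back. A tiny sketch:

```python
def quorum_majority(n_mons: int) -> int:
    """Smallest number of Ceph monitors forming a strict majority."""
    return n_mons // 2 + 1

# Two mons: majority is 2, so a single failure stops the cluster.
print(quorum_majority(2))  # 2
# Three mons: majority is still 2, so one failure is tolerated.
print(quorum_majority(3))  # 2
```

Which is why adding the NUC as a third mon on the public network should make single-node failures survivable.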