Hello,
I am encountering the same issue. Here is my architecture:
3 nodes, Ceph installed on all 3, but only 2 nodes hosting 2 OSDs each. I have the exact symptoms described above and I guess your fix should work for me too, but first I'd like to be sure it is correct...
I'm not sure I follow you; why can't the NUC see the cluster? As far as I can tell, the NUC is fully integrated into the cluster now. Am I missing something?
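For what it's worth, I believe the standard pvecm commands are the way to double-check membership from any node:

pvecm status   # shows quorum information and expected votes
pvecm nodes    # lists the cluster members; the NUC should appear here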
So in my case, I have this in the corosync config:
totem {
  cluster_name: HomeLab
  config_version: 3
  interface {
    bindnetaddr: 192.168.7.20...
You mentioned the totem magic; is that related to this KB? https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network
Are you talking about "Separate After Cluster Creation", with this example?
totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on...
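If I read that KB correctly, separating after creation just means editing /etc/pve/corosync.conf: you bump config_version and point bindnetaddr at the new cluster network. A rough sketch for my setup (the addresses are my assumption, not taken from the KB):

totem {
  cluster_name: HomeLab
  config_version: 4   # must be incremented, otherwise corosync ignores the change
  interface {
    bindnetaddr: 192.168.7.0   # network address of the dedicated cluster network (my assumption)
    ringnumber: 0
  }
}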
Super! Many, many thanks again for all your answers. This was really helpful.
Now I will look into how to present network interfaces in a different subnet without binding an IP on each.
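First idea, if it helps anyone: I believe an interface declared as "manual" in /etc/network/interfaces comes up without an IP bound to it. A sketch (vmbr1 / eth2 are placeholders for my hardware):

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth2
        bridge_stp off
        bridge_fd 0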
Cheers!
Thank you a lot for your time and patience, AlexLup.
I have started the Ceph configuration over: I ran a "pveceph purge" and re-created the Ceph cluster, initializing it on the PUBLIC network this time, so I now have all three nodes. See the new configuration:
[global]
auth client...
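For anyone hitting the same thing, the sequence was roughly this (the --network value is specific to my setup, and I'm quoting the pveceph syntax from memory):

pveceph purge                            # wipe the old Ceph configuration (run on each node)
pveceph init --network 192.168.7.0/24    # re-initialize Ceph on the PUBLIC network
pveceph createmon                        # create a monitor; run on each of the three nodes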
This is nice.
I tried to modify the Ceph config, but now everything is broken, so I have to figure this out. No more Ceph storage for the moment :) Fortunately I only have a test machine on it.
I have built the corosync cluster on the .7 network. It's still a LAN / private network. The .10 network is used exclusively by the Ceph setup.
I also think that having an additional switch will help. I chose a GS110MX, which has 2x 10GbE and 8x 1GbE ports. Which model did you choose?
Yeah, but the thing is that the first two nodes are connected together with a direct cable, so it's impossible to put the NUC in between... except with a switch :( So I definitely need a switch. I should have one tomorrow.
Many thanks!
I fully understand.
Just to be very clear:
- All three nodes have their cluster IP in network 192.168.7.0/24. This is where the Proxmox cluster setup has been done.
- Two of the three nodes have their Ceph mon / initialization in network 192.168.10.0/24. This is a non-switched network, as there is a crossover cable...
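In ceph.conf terms, my understanding is that this maps to something like the following (my reading of the Ceph network options, not copied from my actual file):

[global]
  public network = 192.168.10.0/24    # where the two mons listen (the cross-cabled link)
  cluster network = 192.168.10.0/24   # OSD replication traffic on the same link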
Hi Alexlup,
Of course, I mentioned it 3 messages above :P No worries... In fact my cluster does have 3 nodes now, but the thing is that when one of the two nodes with Ceph deployed goes down, the other node loses access to the storage pool, and it can see it and restart the VM only after the other...
Hello AlexLup & others,
I have set up my cluster and it works like a charm for HA. Except for one thing: Ceph is lost when 1 of the 2 nodes goes down :)
My guess is that on the NUC I also have to configure Ceph as a third monitor node, since two monitors cannot keep quorum when one of them is down... If that's indeed the case, can I achieve that without having...
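I guess something like this on the NUC would do it, with no OSDs involved (assuming the pveceph tooling, which I haven't tried on the NUC yet):

pveceph install     # install the Ceph packages on the NUC
pveceph createmon   # add the NUC as a third monitor; no OSD is required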
Yep, I will put this third "node" in the same cluster network as the 2 others. I will buy a NUC J3455 with 8 GB of RAM and a 32 GB local SSD for the Proxmox install.
I have to do some research to learn how to manage live migration rules so that machines don't try to go to this NUC.
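From what I've read so far, a restricted HA group might be the way to keep VMs on the two big nodes (node names below are placeholders):

ha-manager groupadd big-nodes --nodes node1,node2 --restricted 1   # group limited to the two big nodes
ha-manager add vm:100 --group big-nodes                            # pin an HA resource to that group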
Many thanks again for all...
Perfect, thanks!
I did not mention it, but of course I have 2x 240 GB SSDs for the OS only.
Last thing: I have 6 NICs on my first 2 hosts. Is the fact that the third node will have only one an issue for cluster HA / live migration, even if no VM should run on it?
Thank you.
Many thanks.
So, to summarize, I will have 3 servers:
- 2 with plenty of resources to host VMs and disks (2x 500 GB SSDs for data in each machine)
- 1 Shuttle-like server with very few resources compared to the 2 others, and 1 SSD drive for the OS only
With that, even though I clearly understand it's not recommended...