10GbE cluster network(s) - 3 nodes without switch

vobo70

Hi,
I did a fresh installation on 3 cluster nodes with the network set up as follows:

[Attached image: network diagram of the three nodes]


Is there any option to set up a cluster?
If not, is there anything I can do to make it possible?
Right now each node has /etc/hosts entries pointing to the other two nodes, and at the OS level communication works properly.
But when setting up the cluster it seems the nodes have to be on the same network...
I don't know much about Linux routing, but I don't think that's possible on one network without a switch,
and a 10GbE switch is expensive for my home lab (unlike second-hand LAN cards) and uses a lot of energy.
regards,
Maciek
 
OK - second option:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
The routed setup works great.
I have:
a bridge on the 1GbE card for external access,
and dual 10GbE ports on each node for the mesh cluster.
All cards have their MTU set to 9000.
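For anyone following along, once the mesh addresses are reachable the PVE cluster itself can be created over them from the command line, roughly like this (a sketch assuming the 192.168.20.x mesh addresses from the configs further down and a made-up cluster name; pvecm --link syntax as in PVE 6 and later):

Code:
    # on pve1: create the cluster and bind corosync link0 to the mesh address
    pvecm create homelab --link0 192.168.20.10

    # on pve2 (and analogously on pve3 with its own address):
    # join via an existing member, passing the local mesh address as link0
    pvecm add 192.168.20.10 --link0 192.168.20.20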
But Ceph gives me an error on each node:
root@pve2:~# pveceph status
command 'ceph -s' failed: got timeout
I tried to find a solution, but no luck.
Can someone help me?
Or should I post a new thread?
Thanks in advance.
 
Post the respective /etc/network/interfaces sections from each of those ceph nodes.
 
Here you are:
pve1:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
        mtu 9000

auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.20.10/24
        mtu 9000
        up ip route add 192.168.20.30/32 dev enp1s0f0
        down ip route del 192.168.20.30/32

auto enp1s0f1
iface enp1s0f1 inet static
        address 192.168.20.10/24
        mtu 9000
        up ip route add 192.168.20.20/32 dev enp1s0f1
        down ip route del 192.168.20.20/32

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000

pve2:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
        mtu 9000

auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.20.20/24
        mtu 9000
        up ip route add 192.168.20.10/32 dev enp1s0f0
        down ip route del 192.168.20.10/32

auto enp1s0f1
iface enp1s0f1 inet static
        address 192.168.20.20/24
        mtu 9000
        up ip route add 192.168.20.30/32 dev enp1s0f1
        down ip route del 192.168.20.30/32

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.12/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000

pve3:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
        mtu 9000

auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.20.30/24
        mtu 9000
        up ip route add 192.168.20.20/32 dev enp1s0f0
        down ip route del 192.168.20.20/32

auto enp1s0f1
iface enp1s0f1 inet static
        address 192.168.20.30/24
        mtu 9000
        up ip route add 192.168.20.10/32 dev enp1s0f1
        down ip route del 192.168.20.10/32

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.13/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000
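With those up/down rules in place, the routing table on, for example, pve1 should end up holding a host route to each peer, roughly like this (a sketch of the relevant ip route lines only; flags and the kernel's connected routes omitted):

Code:
    root@pve1:~# ip route
    default via 192.168.10.1 dev vmbr0
    192.168.10.0/24 dev vmbr0 proto kernel scope link src 192.168.10.11
    192.168.20.20 dev enp1s0f1 scope link
    192.168.20.30 dev enp1s0f0 scope link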
 
The files look fine from here. Have you triple-checked that the 3 cables are actually interconnecting pve1 enp1s0f0 - pve3 enp1s0f1, pve1 enp1s0f1 - pve2 enp1s0f0, and pve2 enp1s0f1 - pve3 enp1s0f0? Have you either rebooted or run ifreload -a to apply the network config on each node? Does ip a show each of the corresponding full-mesh interfaces as up or down? Can you ping 192.168.20.10, 192.168.20.20, and 192.168.20.30 from the other machines?
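In shell form, those checks would look roughly like this on each node (addresses taken from the configs above; the jumbo-frame ping is an extra check, since all cards are supposed to run MTU 9000):

Code:
    ifreload -a                        # re-apply /etc/network/interfaces
    ip a                               # are both mesh interfaces UP with the expected addresses?
    ping 192.168.20.20                 # plain reachability over the mesh
    ping -M do -s 8972 192.168.20.20   # does a full 9000-byte frame pass without fragmentation?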

So I just realized that your initial question did not mention Ceph, but later you ask about it. Are you just trying to set up a cluster over this mesh, set up Ceph, or both?
 
I checked all the network settings, the nodes were rebooted, and ping works between all the machines.
I know I'm cluttering up a networking thread, but the problem is that I cannot get Ceph running.
There were no errors during installation;
I did everything as in the docs, but Ceph gives me a timeout error.
This is my /etc/ceph/ceph.conf:
Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster network = 192.168.20.0/24
         fsid = 9f47e518-4613-4564-863b-e8d3a923a1f5
         mon_allow_pool_delete = true
         mon_host = 192.168.20.10
         ms_bind_ipv4 = true
         ms_bind_ipv6 = false
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 192.168.20.0/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pve1]
         public_addr = 192.168.20.10

Should I post it in another thread in the correct forum section?
 
The only difference I have in my ceph.conf is that I put a monitor on each of the 3 mesh nodes instead of just one. Did you read through the Deploy Hyper-Converged Ceph Cluster page?

On that page I specifically see the quote below, which suggests that the Ceph WebGUI setup is only the beginning and not all that is required for everything to work.

Your system is now ready to start using Ceph. To get started, you will need to create some additional monitors, OSDs and at least one pool.
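If it helps, from the shell those extra pieces are typically created with something like the following (a sketch assuming current pveceph subcommands, a spare empty disk at a hypothetical /dev/sdb, and a made-up pool name):

Code:
    # on pve2 and pve3: add a monitor (and optionally a manager) so there are 3 in total
    pveceph mon create
    pveceph mgr create

    # on every node contributing storage: create an OSD on an empty disk
    pveceph osd create /dev/sdb

    # from any node, once OSDs are in: create a pool
    pveceph pool create cephpool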
 
OK, but as I mentioned, my ceph.conf is different from yours in that I have Ceph monitors listed on all 3 hosts.

Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.15.15.20/24
         fsid = 67f4fe3d-5b5a-469a-80d3-bfae71eb1037
         mon_allow_pool_delete = true
         mon_host = 10.15.15.20 10.15.15.18 10.15.15.19
         ms_bind_ipv4 = true
         ms_bind_ipv6 = false
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 10.15.15.20/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.axiom1]
         public_addr = 10.15.15.18

[mon.axiom2]
         public_addr = 10.15.15.19

[mon.pve]
         public_addr = 10.15.15.20
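For reference, once a monitor is running on each node, something like this should confirm they actually form a quorum (standard ceph/pveceph commands):

Code:
    pveceph status       # should no longer time out
    ceph quorum_status   # lists the monitors currently in quorum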
 
