So currently I am running a 4-node Proxmox Ceph cluster (adding a 5th node in two months). The hardware is the same on all the servers. I have seen a lot of different posts about how people split out the Ceph public network, Ceph cluster network, management network, corosync network, VM network and, last but not least, the backup network.
I am looking for some feedback on whether my current setup could be improved or whether it is already ideal as it stands.
Each node has 2x 1 Gb links and 2x 10 Gb links:
One 1 Gb link for VM traffic, management, migration and backups - based on what I have been reading about other people's setups.
One 1 Gb link dedicated to corosync (rough corosync.conf sketch just below this list).
One 10 Gb link dedicated to the Ceph public network.
One 10 Gb link dedicated to the Ceph cluster network.
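For reference, this is roughly how I understand the dedicated corosync link maps into corosync.conf - just a sketch with made-up node/cluster names, ring0 on the dedicated 10.10.5.0/24 network and ring1 on the management network as a fallback:

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.5.80
    ring1_addr: 192.168.1.80
  }
  # ...one node {} block per host
}

totem {
  cluster_name: homelab
  config_version: 5
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Happy to hear whether a second ring over the management link like that is worthwhile or overkill.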
Also, with this setup I am not able to reach the Ceph monitors from inside a VM/container because of how the network is laid out. That would not be a big deal, except that I want to be able to use some of the Ceph services (radosgw, etc.), and right now I can't point a Prometheus scraper at Ceph to pull stats for my own dashboard because of it. It hasn't really been an issue for the last year or so, but I want to improve things and be able to use the features above.
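For context, this is the sort of thing I'd like to run from a monitoring VM once the Ceph side is reachable - a minimal Prometheus scrape job, assuming the ceph-mgr prometheus module is enabled (ceph mgr module enable prometheus) and exporting on its default port 9283 on my node's public address:

# prometheus.yml (snippet)
scrape_configs:
  - job_name: 'ceph'
    static_configs:
      - targets: ['10.10.10.80:9283']   # active mgr on the Ceph public network

Right now that target simply isn't routable from the VM network, which is the core of the problem.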
Below is my interfaces file. Looking for advice to improve it or confirm it is on par with how it should be set up - thanks for your time.
auto lo
iface lo inet loopback

iface eno3 inet manual

auto eno1
iface eno1 inet static
        address 10.10.10.80/24
        mtu 9000
#cephpublic

auto eno2
iface eno2 inet static
        address 10.10.15.80/24
        mtu 9000
#cephcluster

auto eno4
iface eno4 inet static
        address 10.10.5.80/24
#corosync

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.80/24
        gateway 192.168.1.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
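One idea I have been toying with (not applied) to make the monitors/radosgw reachable from selected guests is to turn the Ceph public NIC into a bridge instead of a plain static interface - rough sketch only, and I'd love to hear whether exposing the Ceph public network to guests like this is considered bad practice:

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.80/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000
#cephpublic bridge

With that, eno1 would drop back to "iface eno1 inet manual" and the node would keep the same 10.10.10.80 address, just on the bridge instead of the bare NIC.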