We've been using a traditional SAN with iSCSI for over 10 years, and it has been ultra reliable.
We're now looking at Ceph and have built a three-server Ceph cluster on Dell R740xd servers.
Each server has six interfaces, three to one switch and three to the other:
One port for the public internet
One port for the Ceph public network
One port for the Ceph cluster (internal) network
This all works fine.
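For context, my understanding is that this split maps onto the two network settings in ceph.conf roughly like this (the subnets below are placeholders, not our real addressing):

[global]
    public_network  = 10.0.1.0/24    # Ceph public network: clients, mons and mgrs talk here
    cluster_network = 10.0.2.0/24    # Ceph internal network: OSD replication and heartbeats only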
Our plan was to add these servers to the existing cluster, build out Ceph, and then migrate the storage.
Now, the existing servers only have four interfaces: one public internet and one iSCSI connection to each switch. They don't have a spare NIC for the Ceph cluster network, which I don't believe they need. They share the same subnet, i.e. 10.0.0.0/24, but not the same IPs: the old hosts are 10.0.0.10-20 and the new ones are 10.0.0.30-40, all within the same /24.
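(If it helps, I assume the quickest way to see which networks the new cluster is actually using is something like the following from one of the new nodes; the addresses in the output would obviously be ours:)

ceph config get mon public_network
ceph config get osd cluster_network
ceph mon dump    # shows the monitor addresses clients connect to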
Adding them to the cluster was fine, but as soon as we even just installed Ceph on them, things went crazy: all of the host nodes started shutting down and rebooting.
Before I waste a lot of time on this, can someone confirm that nodes which are not actively doing the storage part (i.e. Ceph clients only) don't need access to the Ceph cluster (internal) network, only the public Ceph network? These client nodes have only minimal boot storage and currently connect to the storage over iSCSI.
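My assumption is that a client only needs to reach the monitors and OSDs on the Ceph public network, so I was planning to verify reachability from a client node with something like this (the monitor IP is a placeholder):

nc -zv 10.0.1.11 3300    # msgr2 monitor port
nc -zv 10.0.1.11 6789    # legacy msgr1 monitor port
ceph -s                  # with a minimal ceph.conf and client keyring on the node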