Is a 3-node Full Mesh Setup for Ceph and Corosync Good or Bad?

cluster expand
We do not foresee any cluster expansion in the future. We sized the potential new hardware based on our current hardware, and our current 2-node cluster has a ton of spare capacity. We would most likely just scale each node up (bigger CPU, more RAM, more disks) if needed.

can you imagine how much less cable mess do we have that way
That is a big reason why we want to go 25Gb and mesh. All the mesh network cables would be nice and tight next to the nodes. On a tangent, if we did go with a switched architecture, we would need 2x10Gb links, bonded, for both the Ceph private and public networks = 4 cables per node. That doesn't connect us to redundant switches, though...

mental quest for me - I like it!
I appreciate the thought you put into your reply!
 
Have a few production PVE+Ceph clusters using a custom FRR with fallback setup, using corosync both inside the mesh and the outside nics. Zero issues at all for years. Usually deploy them when customer has a tight budget and they require Ceph on 25G+ (or plan to add more nodes in the foreseeable future). If 10G network is enough, we just buy switches as they are affordable enough and simplify adding/replacing nodes.
This sounds pretty similar to the architecture we are going for and the limitations we are under (minus adding more nodes in the future).
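
For the corosync piece, I picture something like the below: two links per node, one on the mesh and one on the outside/management NIC, with corosync failing over between them if one drops. Just a sketch to check I understand you correctly; the node names and the 10.15.15.x / 192.168.1.x addresses are made up, not anyone's actual config.

```
# /etc/pve/corosync.conf -- sketch only
totem {
  cluster_name: democluster
  config_version: 4
  version: 2
  ip_version: ipv4-6
  link_mode: passive          # one link carries traffic at a time, knet fails over if it drops
  interface {
    linknumber: 0             # mesh
  }
  interface {
    linknumber: 1             # outside/management NIC
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.15.15.1    # address on the mesh
    ring1_addr: 192.168.1.11  # address on the management NIC
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.15.15.2
    ring1_addr: 192.168.1.12
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.15.15.3
    ring1_addr: 192.168.1.13
  }
}

quorum {
  provider: corosync_votequorum
}
```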

Are you saying that if 10Gb is enough for the Ceph traffic, you use a switched network instead of a mesh, and only go the mesh route once you need 25Gb of bandwidth?
 
I should add that going with 25Gb on a switched network also helps with cable count. We would want redundant switches and redundant network cables per node. So with 25Gb, we need 2 cables for Ceph public and 2 cables for Ceph private, one of each going to each switch. If a switch dies, no problem, all traffic goes through the other switch. If a cable is pulled, no problem, all traffic goes through the other cable.

If we were to go with 10Gb networking, we would want a bonded 2x10Gb pair for each of those links, requiring 8 cables per node.

Then we still have another 4 cables for VM-LAN traffic and management, plus at least 1 cable for Corosync (with failover on the management link). 13 cables per node with 10Gb networking to multiple switches, yuck! 9 cables with 25Gb switched. 5 with 25Gb mesh.
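
For reference, each of those bonded Ceph links would look something like this on the PVE side, with the Ceph public network getting an equivalent second bond. This is only a sketch: the NIC names, the bond mode, and the 10.10.10.0/24 address are assumptions, and running LACP across two separate switches only works if they support MLAG/stacking (otherwise active-backup is the usual fallback).

```
# /etc/network/interfaces (excerpt) -- sketch of one bonded Ceph link, one port to each switch
auto enp65s0f0
iface enp65s0f0 inet manual

auto enp65s0f1
iface enp65s0f1 inet manual

auto bond1
iface bond1 inet static
    address 10.10.10.1/24
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode 802.3ad               # LACP across both switches (needs MLAG/stacking); else active-backup
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
```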