> Perfect.

Yes, do the whole networking before starting the clustering. Do it via the web UI only.
> So I'm 100% clear: the Ceph public network does not need a connection to anything outside in this arrangement?

No, 5) does not need a connection to anything else.
> I read somewhere that three nodes will work but is not ideal.

As I said: no 3-node cluster across two rooms.
What do you mean by that? There's no issue using 3 nodes for production?
> I still have a Xeon-powered workstation that was to be my witness host in VMware. I also have a barebones server. So could I, and should I, set up a fourth server, two in each building, as a five-node cluster using this Xeon workstation (or even PVE with PBS on top) in a separate third room for quorum? If so, that would need (I think) connections to the web UI, Corosync, public and cluster networks. That would be 2 data nodes per building and 5 monitor nodes, with the fifth just a low-power machine with no storage, for quorum. Is it safe to assume I can start with the three I have, everything in one rack, and when I get time and money add the fourth node and go 2, 2, 1? I'm asking because my current SFP switches wouldn't have enough ports for 5 machines.

This also does not work later on. You need to think through every possible downtime outcome: if you use three nodes in one room and that room fails, you still have the same problem I already explained. Two-room setups start with Room 1: 2 nodes | Room 2: 2 nodes | Room 3: quorum node, using a SIZE=4 / MIN_SIZE=2 setup (with a custom CRUSH rule), or with three rooms, each having a full PVE/Ceph node (including storage, not only a quorum-purpose host) and SIZE=3 / MIN_SIZE=2.
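The SIZE=4 / MIN_SIZE=2 two-room layout above depends on a custom CRUSH rule that places two replicas in each room. As a rough sketch only (the rule name is a placeholder, and it assumes `room` buckets have already been defined in the CRUSH map), such a rule could look like:

```
# Hypothetical CRUSH rule for a two-room, 4-replica pool:
# pick 2 rooms, then 2 OSDs on different hosts within each room.
rule replicated_two_rooms {
    id 1
    type replicated
    step take default
    step choose firstn 2 type room
    step chooseleaf firstn 2 type host
    step emit
}
```

With a rule like this and size=4 / min_size=2 on the pool, losing a whole room still leaves 2 replicas, which keeps the pool writable.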
> I believe this kind of answers my above question. Corosync and Ceph have to attach to the quorum node.

By default (without custom CRUSH rules) you still write 4 copies, and they can land anywhere if you don't define room entities in the CRUSH map. Doing a 5-way replication (meaning: every server holds a copy) is absolute OVERKILL. Go for a 4-way mirror (2 copies per room) with the quorum node in room 3. The quorum node can be something really cheap, as long as it has network connectivity for Corosync and Ceph.
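Defining the room entities mentioned above is done in the CRUSH hierarchy. A hedged CLI sketch, assuming hypothetical host names `pve1`..`pve4` and a pool called `vmdata` (none of these names come from this thread):

```shell
# Create room buckets and attach them under the default root.
ceph osd crush add-bucket room1 room
ceph osd crush add-bucket room2 room
ceph osd crush move room1 root=default
ceph osd crush move room2 root=default

# Move the host buckets (and their OSDs) into their physical rooms.
ceph osd crush move pve1 room=room1
ceph osd crush move pve2 room=room1
ceph osd crush move pve3 room=room2
ceph osd crush move pve4 room=room2

# 4 copies total, pool stays writable with 2 remaining copies.
ceph osd pool set vmdata size 4
ceph osd pool set vmdata min_size 2
```

These commands only make the rooms visible to CRUSH; the pool must also use a rule that splits replicas across the `room` buckets, otherwise the placement is unchanged.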
> Thank you Jonas!

Yeah, go for a single-room setup, get familiar with Ceph, and do lots of testing before going live with anything important. As I said above, you would need 2 more servers: one with OSDs, and the second just for quorum (in a third room).
Be careful with the wording. Proxmox VE does not offer mirroring between two completely separated clusters. The example we talked about here is ONE cluster, but with 6 nodes that are fully usable and available; they use intelligent replica placement, so you can lose up to a whole room without much downtime.
There's a way to set up Ceph mirroring between two separated clusters, but that is a lot more complex than Ceph already is for beginners.
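For completeness: this cross-cluster mirroring is handled by Ceph's `rbd-mirror` mechanism. A rough sketch of the snapshot-based, one-way variant (the pool name `vmdata` and the site names are placeholders, and a second live cluster plus a running `rbd-mirror` daemon are assumed):

```
# On the primary cluster: enable per-image mirroring on the pool
rbd mirror pool enable vmdata image

# On the primary: create a bootstrap token for the peer
rbd mirror pool peer bootstrap create --site-name site-a vmdata > token

# On the secondary cluster: import the token to establish the peer link
rbd mirror pool peer bootstrap import --site-name site-b vmdata token

# On the primary: enable snapshot-based mirroring for a single image
rbd mirror image enable vmdata/vm-100-disk-0 snapshot
```

This is a sketch of the general workflow, not a tested recipe; as noted above, it adds significant operational complexity on top of a plain Ceph cluster.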
You're welcome. Ceph is complex in the beginning, but you'll love it once you have a solid setup and see how easy and secure it is (if set up correctly).
I am going to go with three for now, as long as I can add nodes later, which it appears I can. I'd rather spend money on enterprise SSDs right now than on 5 nodes, which would require larger SFP+ switches, more cable to a third room, and an entire server build-out. One room/building will get me going.