[...]
I have
one NIC per node, but the node that is going to hold the storage has a couple of free slots, so I can add a second network card if I need it. That's the plan: one NIC per node, and the storage node will have one or two. The other two nodes are mini PCs (MSI Cubi), so I don't think I will add an extra network card unless I get a USB adapter, which Proxmox may not even recognise.
The QoS question is interesting; what could I use QoS for?
[...]
Oversimplified: you use QoS to make sure that each protocol gets the bandwidth it needs, but can also use more if it's available.
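To make that concrete: on plain Linux that "guaranteed minimum, borrow the rest" behaviour is what an HTB qdisc gives you. Rough sketch only; the interface name and the rate/ceil numbers below are made-up examples, not anything specific to your setup:

    # HTB sketch on a hypothetical 1G NIC eth0, example numbers only
    tc qdisc add dev eth0 root handle 1: htb default 30
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit ceil 1gbit
    # cluster/corosync traffic: small guaranteed slice, can borrow up to the full link
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100mbit ceil 1gbit
    # storage traffic: bigger guaranteed slice, can also borrow
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 600mbit ceil 1gbit
    # everything else
    tc class add dev eth0 parent 1:1 classid 1:30 htb rate 300mbit ceil 1gbit

You would still need tc filters (or VLAN/port based marking) to actually sort traffic into those classes.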
The easiest way is to do VLAN segregation, as symmcon and about every wiki/guide suggests. The poor man's way is to do QoS via VLANs at the NIC level, by using different NIC(s)/vmbrX(s) for different VLANs; see the sketch below. That requires multiple NICs (at least 2, better 3 if doing Ceph). Or you can let the switching side take care of QoS, but that requires switches that support such features.
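As a rough idea of the NIC-level variant on a Proxmox node with two NICs: one bridge for management/VM traffic, one reserved for the storage/cluster network. Interface names and addresses below are assumptions for illustration only, adjust to taste:

    # /etc/network/interfaces sketch, example names/addresses only
    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.11
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

    # second NIC reserved for the Ceph/cluster network, no gateway
    auto vmbr1
    iface vmbr1 inet static
            address 10.10.10.11
            netmask 255.255.255.0
            bridge_ports eth1
            bridge_stp off
            bridge_fd 0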
[...]
It will probably have some production services, but it will be mainly for personal use, so I don't mind if it fails every now and then.
[...]
If that is the case and you can live with occasional saturation of your single 1G links, then you are fine with single 1G links. How many OSDs are you planning on running with Ceph (HDDs dedicated to Ceph storage)?
Not sure if you were planning to install Ceph on all 3 nodes or just on the one which can be upgraded to multiple NICs, but Ceph has that pesky habit of saturating 1G links during backfill, heavy usage or scrub operations. Why is that an issue? Because Corosync seems to be susceptible to jitter, causing your cluster to have "red node" issues, which then require manual intervention. There are also other issues, but they all stem from saturating the available bandwidth.
Compare https://forum.proxmox.com/threads/nodes-going-red.24671/ for reference.
Unless of course you use QoS, or very restrictive Ceph settings/limits to smooth this over.
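"Restrictive Ceph settings" in practice mostly means throttling backfill/recovery so it cannot eat the whole link. Something along these lines in ceph.conf; the values are just a conservative starting point, not a recommendation for your particular cluster:

    # ceph.conf, [osd] section, example values only
    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1
    osd client op priority = 63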
[...]
What is the backplane speed? Both switches are meant to support 16 Gbps.[...]
Backplane speed basically means how much bandwidth the switch can handle concurrently (ingress + egress). If you use 1G + 1G + 2x 1G on your hosts and that is never going to be increased, you need a backplane speed of at least 8 Gbit, since Gigabit Ethernet is full duplex. If it's not full duplex, you want to start screaming at the NIC/switch producer or cable patcher immediately.
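To put numbers on your setup (1G + 1G + 2x 1G host links), assuming all links can be busy at once:

    4 x 1 Gbit = 4 Gbit of traffic per direction
    4 Gbit ingress + 4 Gbit egress (full duplex) = 8 Gbit of backplane needed

So a switch advertising 16 Gbps of switching capacity has comfortable headroom for those four links.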
[...]
Well, on Amazon I just checked and the price difference between the 8-port switch and the 16-port switch (D-Link DGS-1100) is insignificant; there is even a cheaper 16-port option lol.
You probably want to have a look at a proper price search engine like skinflint.co.uk / geizhals.eu (Hardware > Wired Network > Switches), if only to compare products and get a feel for what "is out there", and then buy elsewhere.
A generally great read is also this post:
https://forum.proxmox.com/threads/what-10g-switches-to-use-how-to-do-qos-ovs-ovn-sdn.25125/ from a guy who runs a "home lab" using Proxmox + Ceph and sub-par network equipment. Even better yet is his original post here:
https://www.reddit.com/r/networking/comments/3w62gt/been_assigned_to_completely_redo_the_enterprise/
It is a great read, not just for the network gear, configs and Ceph usage, but also for all the software that gets floated (that one can run on one's homelab). Definitely a good place to educate yourself on "stuff to avoid".