I am a little confused about how to set up a small Proxmox cluster. My goal is to have 3-4 nodes in a cluster. I do not want to use Ceph or GlusterFS, which obviously also means no HA or live migration. I do want to use replication, though. The goal is to recover from failures within hours (not days).
Each node has at least 2x Gbit NICs, and I could easily upgrade all of them to 4x Gbit NICs. How should I set them up?
How about:
1st NIC: Proxmox cluster communication (corosync, replication), with all nodes connected to the same switch. The separate NAS (used for backups, templates, etc.) is also connected to this switch.
2nd NIC: set as VLAN-aware and serves the VMs. I would probably connect the nodes to different switches in case the switch uplink gets saturated.
3rd+4th NIC: if needed, they could form a LACP bond (together with the 2nd) and serve the VMs. A rough config sketch follows this list.
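Something like this is what I have in mind for /etc/network/interfaces (just a sketch of the idea; the interface names eno1-eno4 and all addresses are placeholders for my actual hardware):

```
auto lo
iface lo inet loopback

# NIC 1: Proxmox management, corosync and replication
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# NICs 2-4: VM traffic, either the 2nd NIC alone or bonded as below
auto bond0
iface bond0 inet manual
    bond-slaves eno2 eno3 eno4
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

# VLAN-aware bridge for the VMs on top of the bond
auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```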
Is there anything (deeply) flawed about this setup?
I could also separate corosync and replication traffic into different VLANs (both on NIC 1) and assign them different IEEE 802.1p priority levels. To my knowledge, this should mitigate the risk of replication (storage) traffic disturbing corosync.
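To make that concrete, a fragment of what I mean for NIC 1 (VLAN IDs 10/20 and the addresses are again placeholders, and the switch would of course have to honor the 802.1p tags):

```
# Corosync VLAN on NIC 1
auto eno1.10
iface eno1.10 inet static
    address 10.10.10.11/24
    # tag all frames on this VLAN with 802.1p priority 6
    post-up ip link set dev eno1.10 type vlan egress-qos-map 0:6

# Replication VLAN on NIC 1, left at default priority 0 (best effort)
auto eno1.20
iface eno1.20 inet static
    address 10.10.20.11/24
```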