Hi,
I'm building a Proxmox cluster mainly for running LXC containers. I have 5 identical servers, all connected through a 40Gbps switch. I also want to run Ceph on the same servers so they can share their local disks with each other.
The PVE Ceph wiki page recommends 3 NICs for a high-performance setup.
However, my servers have 2 NICs:
- a 4-port 1Gbps NIC on the mainboard
- a 2-port 40Gbps PCIe NIC
Does this mean I meet the requirements for a high-performance setup? Can I use one 1Gbps port for corosync, one 40Gbps port for the Ceph public network plus internet-facing traffic, and the other 40Gbps port for internal Ceph (cluster) traffic?
Would this be ideal? What are your thoughts?
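In case it helps, this is roughly the split I have in mind in ceph.conf (the subnets are just placeholders, not final):

[global]
    public_network  = 10.10.10.0/24   # Ceph public network on the first 40Gbps port
    cluster_network = 10.10.20.0/24   # internal Ceph replication on the second 40Gbps port

Corosync would then get one of the 1Gbps onboard ports on its own subnet (e.g. 192.168.1.0/24), configured in corosync.conf, so cluster heartbeat traffic stays off the Ceph links entirely.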