In the last thread I didn't have enough info on the servers; I now do.
I'm limited in what I can do: not enough PCIe slots to really use all the drives, NICs, etc.
This is a lab cluster with perhaps 20 active VMs spread over all hosts, plus lots of inactive/low-usage VMs for POCs and the like.
It would be nice to optimize performance if possible.
The basis is 6x HPE DL360 Gen10+, each with:
1x 25 GbE NIC via OCP 3.0
1x 25 GbE NIC via one PCIe slot
4x PM1655 800 GB 24G SAS SSD (high DWPD) - 24 drives total
4x PM897 960 GB SATA - 24 drives total
4x PM9A3 480 GB NVMe M.2 - 24 drives total (2x of them on an NS204i-p NVMe PCIe3 OS Boot Device, a RAID1 card in one PCIe slot)
I now have only one PCIe slot left. Should I use it for an RJ45 NIC for corosync, or try to make a pool with the 4 M.2 NVMe drives?
The NVMe drives are read-oriented (low DWPD), but it would be nice to have an NVMe pool.
I was thinking of buying a bunch of M.2-to-PCIe adapters to get 4 of them into each node, but then I can't have a dedicated corosync NIC.
Maybe a 4x M.2 card in the full-height slot? Gen10+ is supposed to support bifurcation.
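For reference, corosync 3 (kronosnet) supports redundant links, so a dedicated RJ45 NIC could carry ring0 while the 25 GbE fabric serves as fallback. A minimal sketch of the relevant corosync.conf pieces, assuming hypothetical subnets 10.10.10.0/24 (dedicated) and 10.10.20.0/24 (25 GbE) and a placeholder node name:

```
totem {
  version: 2
  cluster_name: lab-cluster
  interface {
    linknumber: 0          # dedicated corosync network
  }
  interface {
    linknumber: 1          # fallback over the 25 GbE fabric
  }
}

nodelist {
  node {
    name: pve1             # placeholder node name
    nodeid: 1
    ring0_addr: 10.10.10.1 # dedicated RJ45 NIC
    ring1_addr: 10.10.20.1 # 25 GbE fallback link
  }
}
```

Even without the extra NIC, a second link on a separate VLAN of the 25 GbE ports gives some redundancy, just not isolation from storage traffic.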
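If the M.2 drives do end up as OSDs, Ceph's device classes make an NVMe-only pool straightforward without touching the SAS/SATA pools. A sketch, assuming the OSD IDs and the pool/rule names (`osd.0`, `nvme-only`, `nvme-pool`) are placeholders for your actual layout:

```shell
# Re-tag an M.2 OSD if its class was autodetected wrong; the class
# must be cleared before it can be set again.
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class nvme osd.0

# CRUSH rule that only places data on OSDs of class "nvme",
# replicated across hosts.
ceph osd crush rule create-replicated nvme-only default host nvme

# Replicated pool pinned to that rule (128 PGs as a starting point).
ceph osd pool create nvme-pool 128 128 replicated nvme-only
```

Note that read-oriented PM9A3s should hold up fine for a mostly-read lab pool, but heavy Ceph write amplification is where low-DWPD drives suffer.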