Could we get some second and third opinions on a plan for a new datacenter deployment?
8 PVE hosts, each with two 16-core Xeons and 512 GB of registered RAM. Each machine also has four 10GbE NICs: two for guest traffic, the other two for storage traffic. Each machine will have four Samsung PM1653 8TB SAS SSDs on an HBA for Ceph.
Storage and guest traffic are handled by independent, redundant switches. All DAC connections.
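For reference, the storage-side bonding I have in mind would look roughly like this on the PVE side (a sketch only — interface names, addresses, and MTU are placeholders, and LACP across two independent switches assumes they support MLAG; otherwise active-backup would be the fallback):

```
# /etc/network/interfaces (excerpt, ifupdown2 syntax) -- names/subnets are placeholders
auto bond1
iface bond1 inet static
    address 10.10.10.5/24            # storage network address of this node
    bond-slaves ens2f0 ens2f1        # the two 10GbE storage NICs
    bond-mode 802.3ad                # LACP; requires MLAG on the redundant switch pair
    bond-xmit-hash-policy layer3+4   # spread Ceph flows across both links
    mtu 9000                         # jumbo frames, if the switches allow it
```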
Guest profiles will be mixed, from archive machines with very little I/O up to SQL appliances.
Backup will go to a bare-metal PBS with at least one 16-core Xeon, using either ZFS on an HBA or LVM on hardware RAID. It will of course be replicated to another site.
The main (and only) problem I can see is the 10GbE links becoming a bottleneck for Ceph traffic. Any ideas / experiences on that matter?
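One idea I'm weighing to get the most out of the two storage links is splitting Ceph's public and cluster networks, so client I/O and OSD replication/backfill don't compete on the same 10GbE link. A rough sketch (subnets are placeholders):

```
# /etc/pve/ceph.conf (excerpt) -- subnets are placeholders
[global]
    public_network  = 10.10.10.0/24   # client-facing OSD/MON traffic
    cluster_network = 10.10.20.0/24   # OSD-to-OSD replication and backfill
```

The trade-off is that each path then has only a single link instead of a bond, so it depends on whether replication bursts or aggregate client throughput is the bigger concern.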
Any input is much appreciated. Thank you!