Hello there,
We want to build a Proxmox cluster with 3 nodes, with Ceph as hyperconverged storage.
We will use 3 Dell PowerEdge R710s for this, each with a dual-port SFP+ card installed and an HBA adapter instead of the PERC 6/i.
I still have some open questions regarding this setup; tips are much appreciated!
We will have a pfSense firewall, with the 3 nodes connected to Network1, the "public" network carrying the IPs of the management interfaces, VMs, etc.
The SFP+ cards will be configured as a full-mesh network for Ceph, so that every node is directly connected to the other two.
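Roughly, this is what I have in mind for each node's /etc/network/interfaces, following the routed setup from the Proxmox wiki's "Full Mesh Network for Ceph Server" article (the interface names enp4s0f0/enp4s0f1 and the 10.15.15.0/24 subnet are just placeholders, shown here for node 1):

# Link to node 2 (interface names and addresses are placeholders)
auto enp4s0f0
iface enp4s0f0 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        up ip route add 10.15.15.2/32 dev enp4s0f0
        down ip route del 10.15.15.2/32

# Link to node 3
auto enp4s0f1
iface enp4s0f1 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        up ip route add 10.15.15.3/32 dev enp4s0f1
        down ip route del 10.15.15.3/32

Each node gets one address on the mesh subnet plus a /32 route per peer, so no switch is needed for the Ceph traffic.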
I stumbled upon some posts saying that Corosync should have its own network as well. Our problem is that we have no room left for another switch (the public network will use switch ports on the firewall directly), so my first question is: can Corosync 3 run over a full-mesh configuration like Ceph?
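From what I understand, Corosync 3 (kronosnet) supports multiple links per node, so I imagine giving it the mesh as link 0 and the public network as a fallback link 1 in /etc/pve/corosync.conf, roughly like this (all names and IPs made up):

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.15.15.1    # full-mesh network
    ring1_addr: 192.168.1.11  # public network as fallback
  }
  # node2 and node3 analogous
}

totem {
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}

If I read the docs right, the links then act as failover for each other, and the preference can be tuned with knet_link_priority.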
We will use 2 SSDs per node in a ZFS RAID1 mirror as boot drives, and for now add a single 4 TB HDD in each node, which will be the OSDs. Second question: how should we configure Ceph if we want to stock up the HDDs later, say to two 4 TB HDDs per node? Is it possible/wise to mix HDD sizes within a node, e.g. one 4 TB and one 2 TB HDD, or even across nodes, e.g. node 1 with 6 TB total and node 2 with 8 TB total?
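To frame the question: my understanding is that Ceph weights each OSD in the CRUSH map by its capacity, so mixed sizes should work but larger disks receive proportionally more data. I picture adding a disk later roughly like this (the device name /dev/sdc is just an example):

pveceph osd create /dev/sdc    # add the new disk as an OSD
ceph osd df tree               # check CRUSH weights and per-OSD utilization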
Third question: can we use ifupdown2 as a drop-in replacement so we do not have to reboot every time the network config changes?
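For reference, this is how I picture the workflow with ifupdown2, as far as I understand its documentation:

apt install ifupdown2
# edit /etc/network/interfaces, then apply without a reboot:
ifreload -a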
If you need any more information, feel free to ask. I hope my English is not too crude for you to understand.
All the best!