Hi all,
I'm hoping I can get some insightful input from this community. I'm looking at building out a new Proxmox cluster with Ceph as the backend. I'd like to get a couple of answers to see if this makes sense and if not where I should make changes.
http://bit.ly/2XvgrXa
It's a 2U, 4-node Supermicro server.
I would like to boot and run Proxmox off an internal M.2 SSD and have the Ceph storage on each node as 6 x 2TB or 6 x 4TB SSDs.
I have some questions:
Each node will have two 10GbE ports: one for the public network and one for the private Ceph and cluster configuration network. Will this work, or would it make more sense to have each server with 4 x 10GbE ports?
I ask because I currently have the following and want to make sure this all works properly: a 1GbE front-end public-facing network, a 1GbE private cluster configuration network, a 10GbE network for storage mounts over NFS, and a 10GbE network for my small Ceph (test) environment.
Can I have a 10GbE network for public-facing traffic along with a combined 10GbE network for the cluster configuration and Ceph storage? Ideally I would have the cluster and Ceph networks VLANed into separate networks so there is no traffic interference. I would also like to mount my backup storage over NFS on a 10GbE VLAN with jumbo frames.
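To make the VLAN idea concrete, here is a rough per-node interface sketch in Debian ifupdown style (as used by Proxmox VE). The interface names, VLAN IDs, and subnets below are my assumptions for illustration, not anything from an actual setup:

```
# Hypothetical /etc/network/interfaces sketch for one node.
# eno1/eno2, VLAN IDs 100/200, and all addresses are placeholder assumptions.

auto eno1
iface eno1 inet manual

# First 10GbE port: public-facing bridge for VM traffic
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Second 10GbE port: carries two tagged VLANs (cluster + Ceph)
auto eno2
iface eno2 inet manual
    mtu 9000

# Corosync (cluster configuration) VLAN
auto eno2.100
iface eno2.100 inet static
    address 10.10.100.10/24

# Ceph storage VLAN, with jumbo frames
auto eno2.200
iface eno2.200 inet static
    address 10.10.200.10/24
    mtu 9000
```

One caveat with this layout: the cluster and Ceph VLANs still share the same physical port, so VLAN separation keeps the subnets tidy but does not isolate Ceph replication bursts from corosync latency the way separate NICs would.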
Does anyone see any issues with this type of deployment?
The VMs on the Ceph storage will primarily be email (Axigen), PMG servers, a couple of DNS servers (BIND), an email archive server (MailArchiva), a firewall (Kerio Control and/or pfSense), and a document server.
There isn't much high-I/O work going on here, but I'd like to make sure Ceph and this type of setup can handle the demands.
So with that, here is what I am thinking:
• 3 nodes of the above configuration (it can do 4, but I'll keep one slot for expansion down the road)
• ~256GB RAM per node
• 2 x 10GbE ports per server
• 1 x 128GB or 256GB M.2 SSD for the boot drive
• 18 x 2TB or 4TB Samsung 850 SSDs (6 per node)
• 2 x ??? Intel CPUs with X cores, once I figure that out as well
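As a sanity check on what the drive list above actually yields, here is a rough usable-capacity sketch. The replication factor of 3 and the ~80% fill target are my assumptions (Ceph's common defaults and headroom advice), not figures from the plan itself:

```python
# Rough usable-capacity estimate for a replicated Ceph pool.
# Assumptions: size=3 replication, and staying under ~80% full
# to leave headroom for rebalancing after a node or OSD failure.

def ceph_usable_tb(num_osds, osd_tb, replicas=3, fill_ratio=0.8):
    """Return practical usable capacity in TB for a replicated pool."""
    raw_tb = num_osds * osd_tb
    return raw_tb / replicas * fill_ratio

print(f"18 x 2TB: ~{ceph_usable_tb(18, 2):.1f} TB usable")
print(f"18 x 4TB: ~{ceph_usable_tb(18, 4):.1f} TB usable")
```

So 36TB raw of 2TB drives works out to roughly 9–10TB of comfortably usable space with 3x replication, which is worth keeping in mind when sizing for the mail and archive servers.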
Can I get some input on whether this is the right way to go, or whether I should rethink it? The goal is for this to be as highly available as possible, with storage that stays stable and expandable, and performance that scales by adding more nodes as it grows.
Thank you for any constructive criticisms you can provide me here.