Hey everyone, I am looking to build a small 3-node cluster to test at work for small-scale deployments. A Proof of Concept (PoC), pretty much.
I plan on testing a hyper-converged configuration, with no shared iSCSI storage for now.
I have three Dell R240 servers to use. They were sitting unused in my stock, so I'm repurposing them for Proxmox.
Server specs:
- Single Xeon E-2236 @ 3.4GHz with 64GB of memory
- Dell BOSS-S1 card with 2x 256GB SSDs in RAID-1 for the Proxmox OS installation
- 4x Dell enterprise 960GB SSDs for storage (no Dell hardware RAID on the storage SSDs)
- 2x 1GbE NICs (onboard)
I am not sure whether I should connect the 10/25/40GbE NICs of each host directly to the other hosts (a full mesh), or use a 10/25/40GbE switch and have all hosts connect back to it. I think that if I were testing Ceph it would be best to use a switch, but if I go with ZFS, would I be fine interconnecting the servers directly without a switch?
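If I do go switchless, I was looking at something like the routed full-mesh setup described in the Proxmox wiki ("Full Mesh Network for Ceph Server"), roughly as in the sketch below. Interface names and IPs are just placeholders for my lab, not my actual config:

```
# /etc/network/interfaces on node1 (placeholder NIC names and IPs)
# Each high-speed port is a direct cable to one of the other two nodes.
auto ens1f0
iface ens1f0 inet static
    address 10.15.15.1/24
    # direct link to node2 (10.15.15.2)
    up ip route add 10.15.15.2/32 dev ens1f0
    down ip route del 10.15.15.2/32

auto ens1f1
iface ens1f1 inet static
    address 10.15.15.1/24
    # direct link to node3 (10.15.15.3)
    up ip route add 10.15.15.3/32 dev ens1f1
    down ip route del 10.15.15.3/32
```

Node2 and node3 would mirror this with their own addresses, and I'd keep cluster/storage traffic on that dedicated 10.15.15.0/24 network while the onboard 1GbE ports handle management and VM traffic.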
I was thinking of using the 4 SSDs in each server for ZFS, but I'm not sure which pool layout I would want. From what I've read, ZFS seems to be a better fit for my setup than Ceph, given the small number of disks per node.
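My current thinking is either striped mirrors (RAID10-style, better IOPS for VM workloads) or RAIDZ1 (more usable capacity). A rough sketch of the two options; device names are placeholders and I'd use /dev/disk/by-id paths in practice:

```
# Option A: striped mirrors (RAID10-style), roughly 1.9TB usable per node,
# generally the better choice for VM IOPS
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Option B: RAIDZ1, roughly 2.8TB usable per node, more capacity but lower IOPS
# and extra padding overhead for VM zvols
# zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

I'd probably just create the pool from the Proxmox GUI (node > Disks > ZFS) so it gets added as storage automatically, and then test the built-in ZFS storage replication between nodes since there is no shared storage.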
As this is a Proof of Concept that will only see light use by IT staff, there is no real risk here. If/when we decide to move to Proxmox, the hardware will be current and properly built out with a 3rd-party vendor.
For those of you who are running Proxmox in production environments, any suggestions for my setup? Is ZFS a good file system to use for our storage?
Any suggestions on what else I could do with this potential build? We already have all of the hardware, so we would like to use what we have for this Proof of Concept.
I am coming from a heavy Dell-VMware environment with shared iSCSI storage.
I will gladly document my build and progress here if it helps others.