Hey guys,
I'm currently running a single 10-year-old 4c/8t ESXi host in my homelab, with a local RAID 5 of consumer SATA SSDs, to host my VMs.
My current VMs are: 5x Debian servers (TeamSpeak, Docker host, chat server, home automation, DLNA server) and 3x Windows servers (2x AD DC, print server).
I want to switch over to a 3-host Proxmox full-mesh configuration, ideally with Ceph for my VMs.
Since I'm new to Ceph and only have minor Proxmox experience (single-host setups only), I could use some input on storage and networking.
I only want to host my VMs on Ceph (my data lives on an external NAS), and I'm not sure whether my planned setup is feasible.
I was looking into buying three ASRock Rack 1U4LW-X570/2L2T RPSU servers with 6c/12t CPUs, meshing them together over their dual onboard 10GbE, and adding a dual-port 10Gb PCIe NIC to each server for the uplink to my core switch.

For the VMs on Ceph I want 2 TB of usable capacity (max. 4x SATA SSD per host), and I'm not sure whether I need to buy enterprise/prosumer SSDs to get enough performance. I'd also like to know whether the single 10Gb link to each host or the 6c/12t CPU might be a bottleneck in this setup.
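For anyone checking my math, here's the back-of-the-envelope sizing I'm working from. It's just a sketch: I'm assuming Ceph's default 3x replicated pool and the default ~85% nearfull warning threshold, and the 1 TB per SSD is only a placeholder, not a drive I've picked:

```python
# Rough Ceph sizing sketch. Assumes a replicated pool with size=3
# (Ceph's default) and the default nearfull warning ratio of 0.85.
HOSTS = 3
OSDS_PER_HOST = 4       # max. 4x SATA SSD per host
OSD_SIZE_TB = 1.0       # placeholder: 1 TB per SSD
REPLICAS = 3            # default replicated pool size
NEARFULL_RATIO = 0.85   # Ceph warns once OSDs pass this fill level

raw_tb = HOSTS * OSDS_PER_HOST * OSD_SIZE_TB
usable_tb = raw_tb / REPLICAS
practical_tb = usable_tb * NEARFULL_RATIO
print(f"raw: {raw_tb:.1f} TB, usable at 3x: {usable_tb:.1f} TB, "
      f"practical (below nearfull): {practical_tb:.2f} TB")
# -> raw: 12.0 TB, usable at 3x: 4.0 TB, practical: 3.40 TB

# Network side: with 3x replication, each client write also gets
# replicated to two more OSDs over the mesh links.
LINK_GBPS = 10
link_gbps_bytes = LINK_GBPS / 8   # ~1.25 GB/s per 10GbE link
sata_seq_gbps = 0.55              # ~550 MB/s sequential per SATA SSD
print(f"one 10GbE link: ~{link_gbps_bytes:.2f} GB/s vs "
      f"4 SATA SSDs: ~{4 * sata_seq_gbps:.1f} GB/s sequential")
```

So 4x 1 TB per host would land me at roughly 3.4 TB of practical capacity, comfortably above my 2 TB target, but a single 10GbE link is slower than four SATA SSDs going full tilt, which is part of why I'm asking about bottlenecks.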
These servers also have 2x 1GbE; maybe those can be used for something useful that I might have forgotten.
I don't want to cheap out on hardware; I just want some input from people who have experience with Ceph/Proxmox HCI.
Thanks a lot