I'm looking for some suggestions on how to redesign my Proxmox (PM) cluster in my home lab. Here is the hardware I have available to me.
1). BIGNAS1 - Desktop PC - Ubuntu Server 22.04
Intel i7 12th Gen
64G RAM
M.2 boot volume
10 x 10TB SATA (ZFS)
2). PM1 - HP DL380 G9 Proxmox 8.3.5
48 x Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (2 sockets)
256G RAM
2 x 1TB RAID1 boot volume (using RAID card)
10 x 10TB SATA drives (ZFS)
3). PM2 - HP DL380 G9 #2 - Proxmox 8.3.5
56 x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (2 sockets)
256G RAM
2 x 3TB RAID1 boot volume (Using RAID card)
No additional drives (though I do have 8 x 3TB SAS available)
4). Synology DS1611+
40TB Synology SHR RAID disk
Used primarily as a 3rd backup target and to replicate specific folders to/from my buddy's Synology.
There is also a Jellyfin (open-source Plex alternative) media server on a separate desktop running Ubuntu.
All computers have 10G cards and are connected (mostly via DAC) to an 8-port MikroTik 10G switch. There are also 2 x 1G PoE switches for cameras and miscellaneous gear, as well as a 10G switch in my home office.
My plan is to use the enterprise-class servers as Proxmox compute nodes and the Ubuntu server as a NAS, NFS-mounting my Proxmox images, templates, and ISOs on each node. My theory is that with no VM storage living on the G9s, I can migrate and fail over any CT or VM to either PM1 or PM2, even if one were to fail, and that a 10G network should handle all the traffic with no trouble at all. I would use a Raspberry Pi as the 3rd vote to complete the cluster quorum (there are ways!).
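Roughly what I have in mind for the shared storage and the Pi, in case it helps to see it spelled out (the dataset name, storage ID, IPs, and subnet below are placeholders, not my actual config):

```
# On BIGNAS1 (Ubuntu 22.04): export a ZFS dataset over NFS
sudo apt install nfs-kernel-server
sudo zfs create tank/pve                          # "tank" is a placeholder pool name
echo '/tank/pve 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On one Proxmox node: add the export as shared storage for the whole cluster
pvesm add nfs bignas1 --server 10.0.0.10 --export /tank/pve \
    --content images,rootdir,vztmpl,iso --options vers=4.2

# The Raspberry Pi would act as a corosync QDevice (a tie-breaking vote, not a full node)
sudo apt install corosync-qnetd                   # on the Pi
apt install corosync-qdevice                      # on both Proxmox nodes
pvecm qdevice setup 10.0.0.20                     # run once from a node; 10.0.0.20 = the Pi
```

The appeal is that once the storage is flagged as shared, a migration only has to move RAM state over the 10G link rather than copying disks.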
It's a shame to waste the 100TB of enterprise-class SATA on PM1, so I think I will create a VM on it and use it as a 2nd backup target for all the other computers.
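Since that pool already lives on the PM1 host, my rough thinking for getting the space into a guest is one of these (the CT/VM IDs, dataset, and storage names are placeholders, and I'm open to better ideas):

```
# On PM1, carve out a dataset on the existing pool ("tank" is a placeholder name)
zfs create tank/backups

# Option A: bind-mount the dataset into an LXC backup container (CT 200 here)
pct set 200 -mp0 /tank/backups,mp=/srv/backups

# Option B: give a backup VM (VM 100 here) a large virtual disk on the pool,
# assuming a ZFS-backed storage entry named "local-zfs" exists on PM1
qm set 100 --scsi1 local-zfs:2000                 # 2000 = disk size in GiB
```

Either way, the pool and its redundancy stay managed by ZFS on the host.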
Yeah it's way overkill for a home lab but it's been a fun ride! Any comments, suggestions, or critiques would be appreciated. Thanks for reading!