I've only been running Proxmox for a couple of months, having moved all of my servers from various machines onto a single hardware platform. I've realised that the hardware itself is now a limiting factor for expansion, and that if I have a hardware failure I also need a plan for a new build to get back up and running quickly.
I've got about 20 LXCs and a couple of VMs running Windows Server and macOS, but I want to add a few more for Windows 11 development etc.
I have to say I'm very happy having made the move from discrete hardware platforms to virtualisation; it's working out far better than I hoped... but I need to look at the hardware for my next phase.
Currently I'm on an old X99 motherboard, and my short-term goal is to max out the memory at 64GB, as that seems to be my immediate limiting factor, though more cores are also very much the direction I want to go.
My goal is to stay with a single server, as I want to reduce my home's kWh usage, and consolidating has already helped with that. I'd love to move to a lower-TDP CPU and cut that usage even further while keeping a high core count.
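To put rough numbers on the power argument, here's a quick back-of-the-envelope calculation. The wall draws (120W for an older X99 box, 60W for a lower-TDP build) and the £0.28/kWh tariff are illustrative assumptions, not measured figures:

```python
# Rough annual running-cost estimate for an always-on homelab server.
# All wattages and the tariff below are assumed, illustrative figures.
HOURS_PER_YEAR = 24 * 365          # 8760 hours
TARIFF_GBP_PER_KWH = 0.28          # assumed UK unit rate

def annual_cost(avg_watts: float) -> float:
    """Return the yearly electricity cost in GBP for a constant average draw."""
    kwh = avg_watts * HOURS_PER_YEAR / 1000
    return kwh * TARIFF_GBP_PER_KWH

current = annual_cost(120)   # e.g. an older X99 build averaging ~120 W
target = annual_cost(60)     # a lower-TDP replacement averaging ~60 W
print(f"current ~£{current:.0f}/yr, target ~£{target:.0f}/yr, "
      f"saving ~£{current - target:.0f}/yr")
```

Halving the average draw at those assumed figures saves roughly £147 a year, which is a meaningful chunk of a £1000 build budget over its lifetime.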
I'm thinking an Intel CPU with onboard graphics. I only have limited need for video encoding (mostly Frigate), and I also have a USB Coral, so passthrough of a full-blown video card isn't really somewhere I want to go; avoiding a discrete GPU helps reduce power consumption as well.
Now the bigger question is storage management. I have 5 x 2TB NVMe drives in a ZFS RAID-Z2 pool, so I need a system with plenty of PCIe lanes and either good M.2 support on board or slots where I can use bifurcation. I also have 5 SATA drives, including a mirrored pair of SSDs I use as the Proxmox OS drives; adding SATA ports via a PCIe card is easy enough if there aren't enough on board.
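The lane maths is what makes NVMe-heavy consumer builds awkward. As a sketch (assuming each NVMe drive uses a x4 link and that a typical LGA1700 desktop CPU exposes around 20 usable lanes, with anything beyond that hanging off the chipset's shared DMI uplink rather than dedicated CPU lanes):

```python
# PCIe lane budget sketch for an NVMe-heavy build.
# Assumptions (illustrative): x4 link per NVMe drive; ~20 usable CPU
# lanes on a typical consumer platform. Extra M.2 slots on consumer
# boards usually share the chipset's DMI uplink instead.
LANES_PER_NVME = 4
TYPICAL_CONSUMER_CPU_LANES = 20

def cpu_lanes_needed(n_drives: int) -> int:
    """Lanes required to give every NVMe drive a dedicated x4 CPU link."""
    return n_drives * LANES_PER_NVME

for n in (5, 8):
    needed = cpu_lanes_needed(n)
    fits = "fits" if needed <= TYPICAL_CONSUMER_CPU_LANES else "exceeds"
    print(f"{n} drives need {needed} lanes ({fits} ~{TYPICAL_CONSUMER_CPU_LANES} CPU lanes)")
```

Five drives already consume the whole CPU lane budget on a typical consumer part, and seven or eight can't all get dedicated lanes; that's why the choice ends up being chipset-attached M.2, bifurcated riser cards, or a workstation/server platform with more lanes.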
Modern enthusiast or home-build PCs seem to have very few PCIe slots, and bifurcation is still a bit of a minefield. I'd probably stick with DDR4 to keep costs down, but I'd want to go from my current 32GB to 128GB.
So what is everyone building new servers with today? Are you using Intel Z790 motherboards, or at least something on socket LGA1700?
How are you handling larger numbers of NVMe drives? In future I may want to add a few more, expanding from 5 to 7 or 8.
What is your experience with low-power Intel CPUs with onboard graphics?
Or is AMD the better platform now? They don't seem to push on-chip graphics as much, which is why I was leaning towards Intel, and AMD don't focus much on low-power parts either.
I'll put a budget on this: let's say £1000 to include motherboard, CPU, and RAM.
I know I'm putting all my eggs in one basket hosting everything on a single PC, but I do have a cold-standby machine with Proxmox installed that I can get up and running in an emergency, along with offline backups of the key VMs, so I'm not completely without a failure strategy... I just don't want to over-invest as a home-lab user.
So what are the current trends and recommendations?