Hi,
I'm currently thinking about building a server for a Proxmox installation.
This will provide a VM environment for a small company, with the following VMs (very basic details here, I know):
- 3x Windows Server 2019, 2 GB RAM, 1 vCPU, 40 GB disk;
- 1x Windows Server 2019, 4 GB RAM, 2 vCPU, 40 GB disk;
- 1x Ubuntu Linux server, 4 GB RAM, 2 vCPU.
The Windows machines will run SQL Server instances hosting somewhere between 100 and 200 very small databases (100 MB to 400 MB each; the Windows VM with more resources will also hold 2x 3 GB databases). Each instance serves about 3 users with non-intensive usage.
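To sanity-check my own numbers, here is a rough sizing sketch in Python (the inputs are just the estimates above, so treat the totals as ballpark):

```python
# Rough capacity check for the planned host (all inputs are my own estimates).

vm_ram_gb = 3 * 2 + 1 * 4 + 1 * 4     # 14 GB committed to guests
vm_vcpus  = 3 * 1 + 1 * 2 + 1 * 2     # 7 vCPUs committed to guests

# Worst case: 200 databases at 400 MB each, plus the two 3 GB ones.
db_data_gb = 200 * 0.4 + 2 * 3        # ~86 GB of database data

host_ram_gb, host_cores = 128, 12     # 24 threads with hyper-threading

print(f"Guest RAM:  {vm_ram_gb} / {host_ram_gb} GB")
print(f"Guest vCPU: {vm_vcpus} / {host_cores} cores")
print(f"DB data:    ~{db_data_gb:.0f} GB")
```

Even in the worst case the guests commit only a small fraction of the 128 GB; I understand ZFS ARC will happily use whatever is left.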
I'm thinking about the following build:
motherboard: Supermicro X9DRH-7TF
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: Supermicro SNK-P0048AP4 (heatsink with fan)
ram: 128 GB - Samsung M393B2G70BH0-CK0 (8x 16 GB RDIMM, 1600 MHz, 1.5 V; the model is on Supermicro's tested-DIMM list for this board)
chassis: Supermicro CSE-825
rack kit: Supermicro MCP-290-00053-0N
caddies: 8x HDD 3.5"
psu: 1x Supermicro 560 W, 80 Plus
This hardware (without disks) should put me at around 800-900€.
About the disks: I'm considering consumer-level SSDs, with ZFS, if the budget permits.
So I thought:
- 2x Intel 545c 128 GB SATA III for the OS boot in a ZFS mirror (only the Proxmox install and maybe an ISO image repository there);
- 4x PCIe-to-NVMe adapters, each with an ADATA SX8200 Pro 512 GB, for VM storage in RAIDZ2;
- 1x SATA III HGST 1 TB disk to hold extra VM disks for data backups (scheduled inside each guest) and also for Proxmox's own backups of the VMs themselves (I may even think about using 3x SAS 150 GB 2.5" 15k disks in RAIDZ1/RAID5 for that job; I have those disks around).
These disks and adapters will put me at around 460€.
The reason for the separate HGST disk is that backups imply many writes and deletes (as old, deprecated backups are rotated out), and I'd rather the daily backup jobs not add wear to the SSDs. Backups don't need to be lightning fast, at least not the ones made on a daily basis.
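For reference, this is roughly how I picture the pools being laid out. A sketch only: the device paths are placeholders (on the real box I'd use the actual /dev/disk/by-id names), and I know the Proxmox installer can create the boot mirror itself:

```python
import subprocess

def zpool_create(*args):
    """Run 'zpool create' with the given arguments (needs root)."""
    subprocess.run(["zpool", "create", *args], check=True)

# Boot pool: the two Intel SATA SSDs in a mirror
# (the Proxmox installer would normally create this as "rpool").
zpool_create("rpool", "mirror",
             "/dev/disk/by-id/SSD_BOOT_1", "/dev/disk/by-id/SSD_BOOT_2")

# VM storage: the four NVMe drives in RAIDZ2 (any two drives may fail).
zpool_create("vmpool", "raidz2",
             "/dev/disk/by-id/NVME_1", "/dev/disk/by-id/NVME_2",
             "/dev/disk/by-id/NVME_3", "/dev/disk/by-id/NVME_4")

# Backup target: the single HGST disk, no redundancy, backups only.
zpool_create("backup", "/dev/disk/by-id/HGST_1TB")
```

One thing I noticed while writing this down: 4x 512 GB in RAIDZ2 leaves only about 1 TB usable (two drives' worth goes to parity). Two striped mirrors would give the same usable space with better random I/O, at the cost of being pickier about which two disks may fail, so opinions on that trade-off are welcome too.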
So now I have many questions and would be grateful for any answers:
1- Would this build be good for the job? Is there any component I should rethink, or anything you'd consider a no-go for a production environment?
2- No idea here... but would this build, running 24/7, be hard on the electricity bill? The chassis has a 560 W PSU, which may (or may not) be enough to power all the components listed; I saw a chassis with 2x 920 W PSUs, but would that be necessary, or too much of a power beast? (My rough math is sketched after this list.)
3- Should I expect difficulties getting the PCIe adapters working under Proxmox so that the NVMe disks can be used by ZFS at their full speed? (See the link-speed check sketched after this list.)
4- Should I forget the NVMe idea altogether, or at least for a production environment, since I'm looking at consumer-level disks?
5- How does ZFS under Proxmox manage SSDs, performance-wise, when it needs to rewrite blocks? Will there be some kind of TRIM? As far as I know, ZFS does not "delete" data in place; it works on a copy-on-write philosophy. I've found some information online about ZFS + Proxmox + SSD TRIM, even on the Proxmox forums, but have not yet come to a clear conclusion. I'd appreciate a pointer in the right direction (I'll have Windows and Linux guests); my current understanding is sketched after this list.
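On question 2, my own back-of-envelope math so far, assuming an average draw of about 150 W (a pure guess for two E5-2620 v2 at light load plus disks and fans) and 0.20 €/kWh (adjust for the local tariff):

```python
# Back-of-envelope electricity cost; both inputs are assumptions.
avg_watts   = 150            # guessed average draw at light load
eur_per_kwh = 0.20           # assumed tariff, varies by contract

kwh_per_month = avg_watts / 1000 * 24 * 30       # ~108 kWh
eur_per_month = kwh_per_month * eur_per_kwh      # ~21.6 €

print(f"~{kwh_per_month:.0f} kWh/month -> ~{eur_per_month:.2f} €/month")
```

Even if the real average were double my guess, it would still sit far below the 560 W rating, so I suspect the dual 920 W chassis would buy redundancy rather than needed capacity. Corrections welcome.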
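On question 3, once the box is up I suppose I could verify that each NVMe drive negotiated its full PCIe link by reading the standard PCI attributes in sysfs; something like this sketch (drive numbering will differ, and an SX8200 Pro should report 8 GT/s at x4 when running at full speed):

```python
import glob, os

def read_attr(path):
    """Read and strip a single sysfs attribute."""
    with open(path) as f:
        return f.read().strip()

# Each /sys/class/nvme/nvmeN/device is the PCI device behind controller N.
for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    dev   = os.path.join(ctrl, "device")
    speed = read_attr(os.path.join(dev, "current_link_speed"))
    width = read_attr(os.path.join(dev, "current_link_width"))
    print(f"{os.path.basename(ctrl)}: {speed}, x{width}")
```

Does that sound like a sane check, or are there known gotchas with passive PCIe-to-NVMe adapters on X9 boards (slot wiring, PCIe 3.0 negotiation) that I should look out for?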
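And on question 5, my current understanding (which I'd love confirmed or corrected): ZFS on Linux 0.8 and later, which current Proxmox releases ship, can TRIM at the pool level, either continuously via the autotrim pool property or on demand with zpool trim. A sketch of what I'd try, using the hypothetical pool names from my layout above:

```python
import subprocess

def zpool(*args):
    """Thin wrapper around the zpool CLI (needs root)."""
    subprocess.run(["zpool", *args], check=True)

# Option A: TRIM continuously as ZFS frees blocks.
zpool("set", "autotrim=on", "vmpool")

# Option B: TRIM on demand (e.g. weekly from cron) and watch progress.
zpool("trim", "vmpool")
zpool("status", "-t", "vmpool")   # -t adds per-vdev TRIM status
```

For deletes inside the guests to reach the pool at all, I believe the virtual disks also need to be attached with discard enabled (discard=on on the VM disk in Proxmox) so the guests' own TRIM commands pass through; is that correct?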
I would appreciate any help, opinions, tips, experience sharing ... any contribution, really.
Thank you for your attention.