Need opinions on server build for Proxmox

zecas

Member
Dec 11, 2019
Hi,

I'm currently thinking about building a server for a proxmox installation.

This will provide a VM environment for a small company, with the following VMs (very basic details here, I know; a rough Proxmox sketch of one of them follows the list):
- 3x Windows Server 2019, 2 GB RAM, 1 vCPU, 40 GB disk;
- 1x Windows Server 2019, 4 GB RAM, 2 vCPU, 40 GB disk;
- 1x Ubuntu Linux Server, 4 GB RAM, 2 vCPU.
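For reference, the rough shape of one of the smaller Windows VMs in Proxmox terms would be something like this (VM ID, storage, bridge and ISO names are just placeholders, not a final config):

Code:
qm create 101 --name win2019-sql1 --memory 2048 --cores 1 --sockets 1 \
  --ostype win10 --scsihw virtio-scsi-pci \
  --scsi0 local-zfs:40,discard=on \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/win2019.iso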

The Windows machines will be used to run SQL Server instances with some 100-200 very small databases (from 100 MB to 400 MB each; the Windows VM with more resources will also hold 2x 3 GB databases). All with about 3 users and non-intensive usage.

I'm thinking about the following build:

motherboard: Supermicro X9DRH-7TF
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: Supermicro SNK-P0048AP4 (heatsink with fan)
ram: 128 GB - Samsung M393B2G70BH0-CK0 (8x 16 GB RDIMM 1600 MHz 1.5 V, model on Supermicro's list of tested DIMMs for this motherboard)
chassis: Supermicro CSE-825
rack kit: Supermicro MCP-290-00053-0N
caddies: 8x 3.5" HDD caddies
psu: 1x Supermicro 560 W, 80 Plus

This hardware (without disks) should put me at around 800-900 €.



About the disks, I was thinking about the possibility of going with consumer-level SSDs on ZFS, if the budget permits.

So I thought (rough zpool sketch after the list):
- 2x Intel 545c 128 GB SATA III, in a ZFS mirror for boot (only the Proxmox OS and maybe an ISO image repository in there);
- 4x PCIe-to-NVMe adapters, each with an ADATA SX8200 Pro 512 GB, for VM storage in raidz2;
- 1x SATA III HGST 1 TB disk to hold extra VM disks for data backups (scheduled inside each guest) and for Proxmox's own backups of the VMs themselves (I may even think about using 3x 150 GB 2.5" 15k SAS disks I have around, in raidz1/RAID5, for that job).
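In zpool terms, what I have in mind is roughly this (device names are placeholders, and the boot mirror would actually be created by the Proxmox installer when choosing ZFS RAID1):

Code:
# VM storage: 4x NVMe in raidz2
zpool create -o ashift=12 vmdata raidz2 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# backup target: single 1 TB HDD, or alternatively 3x SAS in raidz1
zpool create -o ashift=12 backup /dev/sdc
# zpool create -o ashift=12 backup raidz1 /dev/sdd /dev/sde /dev/sdf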

These disks and adapters will put me at around 460 €.

The reason for the extra HGST disk is that backups imply many writes and deletes (of old, deprecated backups), so I would rather not have daily backup jobs contribute to SSD wear. Backups don't need to be lightning fast, at least not the ones made on a daily basis.
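For the Proxmox-side backups I'm picturing something along these lines, with the regular jobs scheduled under Datacenter > Backup (the storage name is just a placeholder for whatever I define on the HGST disk or SAS pool):

Code:
# one-off backup of a VM to the HDD-backed storage
vzdump 101 --storage backup-hdd --mode snapshot --compress gzip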


So now I have many questions and would be grateful for any answers:

1- Would this build be good for the job? Is there any component I should rethink, or anything you think should be a no-go for a production environment?

2- No idea here... but would this build, running 24/7, be hard on the electricity bill? The chassis has a 560 W PSU, which may (or may not) be enough to power all the components listed. I saw a chassis with 2x 920 W PSUs, but would that be necessary, or too much of a power-hungry beast?

3- Should I expect difficulties getting the PCIe adapters to work with Proxmox, so that the NVMe disks can be used for ZFS at their full speed?

4- Should I just forget the NVMe solution altogether, or at least for a production environment, since I'm considering consumer-level disks?

5- How does ZFS under Proxmox manage SSDs, performance-wise, when it needs to rewrite sectors? Will there be some kind of TRIM? Because as far as I know, ZFS does not "delete" data in place; it works on a copy-on-write philosophy. I've found some information online about ZFS + Proxmox + SSD TRIM, even on the Proxmox forums, but have not yet come to a clear conclusion. I would appreciate it if you could point me in the right direction (I'll have Windows and Linux guests).
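From what I've gathered so far (please correct me if I'm wrong), the pieces would be something like this, with pool and VM names as placeholders:

Code:
# let ZFS pass discards down to the SSDs as blocks are freed
zpool set autotrim=on vmdata

# or trim manually / from a cron job once in a while
zpool trim vmdata

# expose discard on the virtual disks so the guests' TRIM reaches ZFS
qm set 101 --scsihw virtio-scsi-pci --scsi0 vmdata:vm-101-disk-0,discard=on

Linux guests would then still need fstrim (or the discard mount option), while Windows should issue TRIM on its own.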


I would appreciate any help, opinions, tips, experience sharing ... any contribution, really.


Thank you for your attention.
 
Hi zecas,

welcome to this forum! :)

These are my thoughts on your exciting project:

a) Your server hardware base looks good - oldie but goldie! :D

I was thinking about the possibility of going with consumer-level SSDs
Just three words: Don't do it!
If you want to build a server for business purposes, use enterprise-grade hardware. Especially when it comes to your storage backend, don't think about consumer or even prosumer disks. Although budget seems to be an issue for you, I would recommend Samsung's SM883 SSD series. The 240 GB model starts at about 120 Euro (here in Germany). Maybe you could put two of them together into a ZFS RAID1 for Proxmox and such, and use another two of the needed size to build the storage for your vdisks the same way (ZFS RAID1).
When it comes to "storage technology" I'm a little bit old school, so I would recommend using the 3.5" bays (do I guess right that they are hot-swappable? That would be a great thing) with 2.5" adapters and "normal" SATA SSDs.
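In rough terms, something like this: let the Proxmox installer build the ZFS RAID1 boot pool from the first pair, then create a second mirror for the vdisks and add it as ZFS storage in the GUI (device names below are only examples):

Code:
# second pool for the virtual disks, out of the other two SM883s
zpool create -o ashift=12 vmdata mirror /dev/sdc /dev/sdd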
The chassis has a 560W PSU
Hard to say, but I would guess that this is enough. As a wild guess (and depending on your final disk decisions) I would estimate that the server will consume about 150 watts.
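Very rough numbers, just so you can see where that guess comes from (all ballpark figures, nothing measured):

Code:
2x E5-2620 v2 (80 W TDP each)    ~  30-160 W  (idle vs. full load)
8x 16 GB RDIMM (~4 W each)       ~  30 W
4-6x SATA SSD (~2-3 W each)      ~  10-15 W
1-2x 3.5" HDD                    ~  10-20 W
Board, 10GbE, fans, PSU losses   ~  40-60 W

So for your kind of load I would expect something in the 120-200 W range at the wall most of the time.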
 
Hi zecas,

welcome to this forum! :)

These are my thoughts on your exciting project:

a) Your server hardware base looks good - oldie but goldie! :D

Thanks, I appreciate your opinion. Going for not-so-recent hardware seems a risk, but I believe that going for quality hardware can still provide a good solution.

I just hope that:
- Hardware works well for some years to come (hence going for enterprise stuff);
- Hardware provides good performance for the job at hand and gives room for some growth.




Just three words: Don't do it!
If you want to build a server for business purposes, use enterprise-grade hardware. Especially when it comes to your storage backend, don't think about consumer or even prosumer disks. Although budget seems to be an issue for you, I would recommend Samsung's SM883 SSD series. The 240 GB model starts at about 120 Euro (here in Germany). Maybe you could put two of them together into a ZFS RAID1 for Proxmox and such, and use another two of the needed size to build the storage for your vdisks the same way (ZFS RAID1).
When it comes to "storage technology" I'm a little bit old school, so I would recommend using the 3.5" bays (do I guess right that they are hot-swappable? That would be a great thing) with 2.5" adapters and "normal" SATA SSDs.

Uhm, those ADATA NVMe SSDs looked quite nice for the job... I guess I'll have to rethink this.

The only non-enterprise-specific part would be the storage, as I was putting high expectations on securing data with ZFS and saving some money on the more expensive SSDs (even though I was searching for the best among consumer/prosumer products). I think I got somewhat convinced by the amount of information online from people who clearly know a lot about ZFS and data security, showing off big storage server solutions built with (for example) Western Digital Green HDDs or other non-enterprise disks.

Since your post I've been digging around, looking at alternatives and how to build this system. I prefer to plan carefully, not just because there is money involved, but because I want to provide a good solution.

So the Samsung SM883 240 GB looks nice; I can get it for 120 € on amazon.de (almost the same price as a Samsung 860 Pro 250 GB, which sells at ~100 €):

SAMSUNG SSD SM883 240 GB 2.5" (6.3 cm) SATA III, bulk (~120 €)
Samsung SSD SM883 480 GB SATA 6 Gb/s (~190 €)

Now I've been thinking that, for a start, around 500 GB is more than enough, but I need to make sure the data is safe and that I can expand later.

So I'm changing my mind about going raidz2 for VM storage, and instead thinking of a zpool with 2x mirror vdevs. 4x Samsung SM883 240 GB in such a config would give me 480 GB of storage, with 1-disk redundancy per vdev (commands sketched below):
- vdev-1 (mirror): 2x Samsung SM883 240 GB
- vdev-2 (mirror): 2x Samsung SM883 240 GB
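In command terms (placeholder device IDs, and I would probably do this via the GUI or installer anyway), I understand the pool would be created like this, and expanded later by attaching another mirror pair:

Code:
zpool create -o ashift=12 vmdata \
  mirror /dev/disk/by-id/ata-SM883_A /dev/disk/by-id/ata-SM883_B \
  mirror /dev/disk/by-id/ata-SM883_C /dev/disk/by-id/ata-SM883_D

# later expansion: add a third mirror vdev
# zpool add vmdata mirror /dev/disk/by-id/ata-SM883_E /dev/disk/by-id/ata-SM883_F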

Even though SSDs are more expensive, comparing with HDDs I would have:
- Western Digital HGST Ultrastar 1 TB - 100 €
- Samsung SM883 240 GB - 120 €

Thus 4x SSDs end up a bit more expensive than 4x HDDs (though not by that much, I might say), with a big drop in available space (480 GB SSD vs. 2 TB HDD) but a big bonus in performance (per disk, roughly 500 MB/s for the SSD vs. 200 MB/s for the HDD). So maybe I'll end up with SSDs as a viable solution.


... this looks nice, but out of curiosity, what if:

I add a 3rd SSD to each mirror, a non-enterprise one (for cost saving), just to get 2-disk redundancy per vdev? I'm not expecting this to be a best practice in the enterprise world, but has anyone used that approach?

In this setup it would end up being 1 zpool with 2 vdevs composed of (rough commands after the list):
- vdev-1 (mirror): 2x Samsung SM883 240 GB + 1x Samsung 860 Evo 250 GB
- vdev-2 (mirror): 2x Samsung SM883 240 GB + 1x Samsung 860 Evo 250 GB
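Which, if I understand correctly, would simply be 3-way mirror vdevs (placeholder device names again):

Code:
zpool create -o ashift=12 vmdata \
  mirror /dev/sdb /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf /dev/sdg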

The way I think about it, it would give 1-disk redundancy between the enterprise SSDs, "backed up" by a cheaper SSD for additional peace-of-mind 2nd-disk redundancy. Mind that I'm only considering such an approach because I'm talking about mirror vdevs here.

Rule for replacement: if an SSD fails, it must be replaced by another one of the same or higher category (enterprise replaced by enterprise; consumer replaced by consumer or enterprise).
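The replacement itself would then just be the usual resilver, e.g. swapping a failed consumer disk for an enterprise one (placeholder IDs):

Code:
zpool replace vmdata /dev/disk/by-id/ata-860EVO_OLD /dev/disk/by-id/ata-SM883_NEW
zpool status vmdata   # watch the resilver progress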




When it comes to "storage technology" I'm a little bit old school, so I would recommend using the 3.5" bays (do I guess right that they are hot-swappable? That would be a great thing) with 2.5" adapters and "normal" SATA SSDs.

Yes, the chassis will have front 3.5" hot-swap caddies, so if I don't go NVMe (which would be connected internally), the solution to consider would be to fit SATA drives into the caddies with 2.5"-to-3.5" adapters.

The motherboard has SAS 6 Gb/s and SATA III 6 Gb/s, so nowadays I don't see any benefit in going specifically with SAS drives, as I'll probably get similar performance from SATA drives.




Hard to say, but I would guess that this is enough. As a wild guess (and depending on your final disk decisions) I would estimate that the server will consume about 150 watts.

It's good to know that 560 W may be enough for the job. Going with SSDs will also help lower the power consumption.

I'm still thinking about that 2x 920 W PSU possibility (PWS-920P-SQ); in that scenario I would have a 2U chassis with 12x 3.5" hot-swap caddies on the front panel (like the CSE-826BE16-R920LPB).

I'm assuming that even with 2x 920 W, the PSUs will only draw what the system needs, so I would benefit from PSU redundancy and have more power available for expansion (at least for future disk additions).




I forgot: I recommend giving at least 2 vCPUs to your Windows servers. You will be thankful when the VMs are installing updates, performing "disk cleanups", scanning for malware, etc.

Initially I gave 2 vCPUs and 4 GB to all machines, but then dropped that to evaluate VM behavior during daily usage and plan the new server's minimum required resources.

I will give (at least) 2 vCPUs and probably some more RAM when the VMs migrate to the new server, depending on the performance benefit I see on each VM.
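Which, from what I understand, should just be a matter of something like this per VM (ID as an example), taking effect on the next VM restart:

Code:
qm set 101 --cores 2 --memory 4096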



Thank you for your opinions, I really appreciate them.

Keep them coming this way :)
 
Hi,

I would do this:

- 2 small HDDs, like 2x 2 TB rotational disks, used for the Proxmox OS and for backups of your VMs (Debian + mdraid, or HW RAID)
- 5 NVMe disks in a RAID10-style layout (2 striped mirrors + 1 spare), enterprise grade (rough zpool sketch below)
- run your MS SQL Server on Ubuntu Server - it works fine and you will use less RAM, fewer IOPS, less storage space...
- allocate at least 4 GB of RAM for the bigger SQL instance (best results when the entire DB fits in RAM)
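The NVMe pool I mean would look roughly like this (placeholder device names):

Code:
zpool create -o ashift=12 nvmepool \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1 \
  spare /dev/nvme4n1

And SQL Server on Ubuntu is a simple install; from memory it is roughly the following (check Microsoft's docs for the exact repo matching your Ubuntu release):

Code:
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"
sudo apt-get update && sudo apt-get install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup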

Good luck / Bafta !
 
