Hard Drive Setup for Proxmox Node.

Jarvar

Active Member
Aug 27, 2019
I was looking for some advice.
I am going to be setting up Proxmox VE on this Supermicro AS-5019D-FTN4 with an EPYC 3251 processor.
Initially, I was going to set it up with Intel D3-S4610 960 GB drives in ZFS RAID 1. My old setup ran on bare metal servers with Windows Server 2016 Essentials and Windows Server 2019 Standard. When I set those boxes up with Dell, the recommendation from the software company supporting the small office was dual RAID 1: one mirror for the operating system and a separate mirror for the data drives.

What are the recommendations with Proxmox? I currently have an Intel NUC running Proxmox VE 6 with ZFS RAID 1 across an NVMe drive and a 2.5" SSD. Somehow that setup worked when I put it together.

The thing is, what should the new setup on the replacement box look like? Dual ZFS RAID 1, with something like a pair of 480 GB SSDs for the Proxmox OS, and another ZFS RAID 1 pair of 960 GB drives for the VMs and data?

Any help and input would be much appreciated.
Thank you.
 
Dual ZFS RAID 1, with something like a pair of 480 GB SSDs for the Proxmox OS, and another ZFS RAID 1 pair of 960 GB drives for the VMs and data?

As always: It depends.

PVE itself uses only 4-8 GB of space, most of the time even less than that. I wouldn't want to run a system consisting of two ZFS pools holding similar kinds of data. Best performance is always one pool with two mirrors (RAID10 equivalent), ideally with same-size SSDs, e.g. four 480 GB drives for 960 GB of usable space. If you want to split data and OS (e.g. for easier reinstallation etc.), I would go with two 16 GB SSDs for the OS. You could also split the VM OS and VM data over the two pools you planned, but this increases the complexity. Often, KISS is the best approach, and that would be one pool.
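Roughly, the two layouts look like this from the command line (pool names and device paths are only placeholders here; in practice you would use the /dev/disk/by-id names):

Code:
# one pool made of two mirror vdevs (RAID10 equivalent) - best performance
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# or: OS pool created by the PVE installer (ZFS RAID1) plus a separate data pool
zpool create tank mirror /dev/sdc /dev/sdd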
 
As always: It depends.

PVE itself uses only 4-8 GB of space, most of the time even less than that. I wouldn't want to run a system consisting of two ZFS pools holding similar kinds of data. Best performance is always one pool with two mirrors (RAID10 equivalent), ideally with same-size SSDs, e.g. four 480 GB drives for 960 GB of usable space. If you want to split data and OS (e.g. for easier reinstallation etc.), I would go with two 16 GB SSDs for the OS. You could also split the VM OS and VM data over the two pools you planned, but this increases the complexity. Often, KISS is the best approach, and that would be one pool.

Thank you for your input.
So the thing is, with these units the small offices want put in, they are half-depth rack units. I put in a full Dell PowerEdge T330 in the past, which could accommodate a whole set of drives, up to 8 of them in the hot-swap bay. However, their feedback was that it was too large. I had to convert it to a rackmount, but it still took up 5U. So now I have moved their rack to a spot in the basement. The thing is, their old 24U rack would not fit down there, and my impression was that they wanted something smaller still. I moved to a wall-mounted 12U rack, which is not a full-depth unit. I managed to put Proxmox onto an Intel NUC in the meantime.
So these Supermicro units I am putting in only accommodate 4 x 2.5" hard drives and maybe 1 NVMe. I am limited in options.
I already have some 960GB Intel D3-S4610 drives.
I can either do like you say and make one pool with two mirrors for a total of 4 drives, or I could follow your alternative and make a separate boot/OS mirror on 240 GB or 480 GB Seagate Nytro 1551s, paired with a set of 960 GB mirrors for storage.
I can't really find high-endurance 16 GB drives for the OS only, and space is at a premium with these embedded units...
 
I can't really find high-endurance 16 GB drives for the OS only, and space is at a premium with these embedded units...

Yes, high endurance is the problem. I use old 128 GB SM863s for that, which is total overkill. In my rack servers I just boot from old 146 GB SAS drives in hardware RAID 1. I do not boot my PVE servers often, and it's server hardware, which normally takes extremely long to get through the BIOS POST anyway, so the longer PVE boot times (compared to an SSD) are negligible.

An alternative to a NUC is the HP MicroServer, which is a mini cube with server hardware, so you're also a little bit more "desktopy". All vendors normally also have tower cases in different sizes and shapes. There are also office servers, which are very quiet.
 
Yes, high endurance is the problem. I use old 128 GB SM863s for that, which is total overkill. In my rack servers I just boot from old 146 GB SAS drives in hardware RAID 1. I do not boot my PVE servers often, and it's server hardware, which normally takes extremely long to get through the BIOS POST anyway, so the longer PVE boot times (compared to an SSD) are negligible.

An alternative to a NUC is the HP MicroServer, which is a mini cube with server hardware, so you're also a little bit more "desktopy". All vendors normally also have tower cases in different sizes and shapes. There are also office servers, which are very quiet.
Thank you so much. I will look into these options. I have an old server running dual ZFS RAID 1 with 240 GB Intel and 960 GB drives. Maybe I configured it wrong, but the 240 GB drive gets divided up into "local" and "local-zfs" with 102 GiB and 188 GiB respectively. Then I added the 960 GB mirror as storage where I can save my VMs. Maybe this isn't the standard setup. Do I need to reconfigure it?
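For context, this is roughly how I look at it on that box (assuming the default ZFS install layout, where "local" and "local-zfs" are, as far as I understand it, just two storage entries sharing the same pool rather than two partitions):

Code:
pvesm status                    # lists 'local' (directory) and 'local-zfs' (zfspool) with their free space
zfs list -o name,used,avail     # rpool/ROOT/pve-1 backs 'local', rpool/data backs 'local-zfs'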

Any experience with the Seagate Nytro 1551, which seems to have a 3 DWPD endurance rating?
My other alternative would be the Intel D3-S4610, but it's priced higher...
 
Hi,

What's the maximum depth the server can have?

Greets
Stephan
The maximum depth according to their spec sheet is:
Max. mounting depth: 22.2″

I know there's also the option of an HPE ProLiant DL20, which has a depth of around 17", but then you also have to stick with HPE's proprietary HDD hot-swap caddies and their own branded drives...
I did have a cube-format unit like LnxBil also proposed, but the height would take up the remaining available slots in the rack, and the rest of the equipment would have to fit without being rack mounted, so I decided against that option. I believe the height is close to 4U anyway, so if I had the space I would consider it, but embedded units seem to be a better fit. Here is a picture of it before I connected all the ethernet cables. There is a router on top, but it doesn't show when the power is off since it's black.
[Attached image: IMG_2837x.jpg]
 
but then you also have to stick with HPE's proprietary HDD hot-swap caddies and their own branded drives...
I would also advise against HPE. We had it for years, and in my opinion it was not better than "middle-class" server hardware, just more expensive. And as far as I remember: for getting HPE firmware updates you need to have a valid support contract.
For a few years now we have been using Supermicro hardware from Thomas Krenn, and we're very happy. Maybe something like this could work for you?
With those four 2.5" hot-swap bays you could install Proxmox on two small SSDs configured as ZFS RAID 1, and make another ZFS RAID 1 out of two SSDs for storage.
 
For getting HPE firmware updates you need to have a valid support contract.

Unfortunately so, yes.

For a few years now we have been using Supermicro hardware from Thomas Krenn, and we're very happy. Maybe something like this could work for you?
With those four 2.5" hot-swap bays you could install Proxmox on two small SSDs configured as ZFS RAID 1, and make another ZFS RAID 1 out of two SSDs for storage.

For new hardware, Supermicro is great, of course. HPE is cheaper on used hardware (at least in my experience) due to the huge volume of turnover; that's why I mentioned it.
 
Unfortunately so, yes.



For new hardware, Supermicro is great, of course. HPE is cheaper on used hardware (at least in my experience) due to the huge volume of turnover; that's why I mentioned it.
Wow, thank you both for writing me back on this forum.
I am setting this up for a small dental office and they don't really want any used hardware, so that's why I am going with Supermicro.
I think Supermicro also has a lights-out management alternative now, but from my understanding it's paid.

One of the issues I have run into with the current setup on the Intel NUC is the ZFS RAID setup. It's configured as ZFS RAID 1 using an NVMe and a 2.5" SSD. The setup went okay, but I'm not entirely sure how that works, since most RAID setups should be between like devices, although they are both 1 TB in size.
The other thing is, I didn't want to spend too much initially, as it was my first time trying Proxmox in a production environment. I used an ADATA XPG SX8200 Pro NVMe and a Crucial MX500 which I had on hand.

I have had the unit running as their Proxmox server, with Windows Server 2019 Standard as a VM on it acting as DC and AD, since October 7th, 2019.
The thing is that the ADATA drive just budged from showing 1% SSD life used to 2% this week. The Crucial MX500 is now showing 92% SSD life remaining, which has got me somewhat on edge.
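For what it's worth, those percentages can be read directly on the node with smartctl (device names here are just examples):

Code:
apt install smartmontools
smartctl -a /dev/nvme0     # NVMe: check "Percentage Used" and "Data Units Written"
smartctl -a /dev/sda       # SATA SSD: check the wear/lifetime attributes, e.g. Percent_Lifetime_Remain on the MX500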

Thus I am going to build something more "enterprise" grade and perhaps put the NUC in as a backup node or cluster member. I might even just retire it to a workstation for regular office use.
So I do have a little concern about the smaller drives for the OS. I have read that some people overprovision their drives, meaning they only partition a portion of the drive so that it does not wear out as fast. I'm not sure if this is necessary with enterprise drives like the Intel and Seagate Nytro...
I'm running a homelab Proxmox server on old Dell hardware, and its drives' percentage used is still 0% or 1%; that is after 1 year of production use as a bare metal Windows Server 2016 Essentials box. However, since I installed Proxmox on it, it has not been used as heavily as the dental office unit, which runs daily backups to NFS storage.
 
I think Supermicro also has a lights-out management alternative now, but from my understanding it's paid.

It's always paid, also with HPE. Supermicro has had similar functionality for at least a decade.

It's configured as ZFS RAID 1 using an NVMe and a 2.5" SSD. The setup went okay, but I'm not entirely sure how that works, since most RAID setups should be between like devices, although they are both 1 TB in size.

Technically it works, but it is only as fast as the slower of the two devices.

The Crucial MX500 is now showing 92% SSD life remaining, which has got me somewhat on edge.

Yes, consumer and even prosumer hardware is not good for running ZFS in a PVE environment. I also had to learn that the hard way.

I'm not sure if this is necessary with enterprise drives like the Intel and Seagate Nytro...

No, it's not. Enterprise-grade SSDs come with internal overprovisioning, so you do not need to do it yourself; e.g. a 1 TB drive comes with something like 128 GB of extra space that is used internally and cannot be accessed.
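If you ever do want to overprovision a consumer SSD manually, the usual trick is simply not to allocate the whole drive (ideally on a freshly trimmed or erased disk), e.g. something like:

Code:
# partition only ~90% of the disk and leave the rest unallocated for the controller
sgdisk -n 1:0:+860G /dev/sdX    # size and device are placeholders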
 
It's always paid, also with HPE. Supermicro has had similar functionality for at least a decade.
If you're talking about IPMI:
It comes with most of Supermicro's server mainboards, and most of the functionality is free. As far as I can see, features like BIOS firmware upgrades or centralized management of many servers require a kind of paid subscription.
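The basics can also be scripted against the BMC with ipmitool, e.g. (host and credentials below are placeholders):

Code:
apt install ipmitool
ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <password> chassis status   # power state
ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <password> sel list         # hardware event log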
 
And relating to the choice of HDDs/SSDs: when you go through this forum you will eventually find a consensus: when you do virtualization for business purposes, buy enterprise-grade hardware and have a good backup strategy - end of story. :)
 
Our "calculation" goes like this: Thanks to the proxmox team we get an enterprise grade virtualization platform and safe a lot of money compared to the "big players". And a part of this savings we spent in nice (and enterprise grade) hardware. :)
 
Our "calculation" goes like this: Thanks to the proxmox team we get an enterprise grade virtualization platform and safe a lot of money compared to the "big players". And a part of this savings we spent in nice (and enterprise grade) hardware. :)


Thank you so much. I am probably going to go ahead today and start putting this unit together.
I have settled on going with Intel D3-S4610 drives. I am going to go with a pair of 240 GB SSDs mirrored in ZFS RAID 1 for the OS and another pair of 960 GB SSDs in ZFS RAID 1 for the VMs and storage.
I know some have suggested a possibly larger RAID for the VMs and storage, but this unit can only fit 4 x 2.5" drives and 1 NVMe. I think it could fit another NVMe on a riser card, but then it couldn't fit all 4 drives.
Anyway, I will keep one spare 240 GB drive and one spare 960 GB drive in case there is an issue and I need to replace one.

If this setup is successful, I will replicate it with another one of their offices in the new year, and then I can have the spare parts for both units.

I have a similar setup at home with their old Dell equipment and it seems to work well. It's a Dell T330, which is a huge 5U rackmount or tower unit in comparison, which the office wanted to move away from. I had initially set it up as a bare metal Windows Server 2016 Essentials box with a dual RAID 1 setup. Now it's a homelab machine for my learning.

I found the Seagate Nytros don't look as durable as the Intels. The Intel's endurance is quoted in PBW while the Seagate's is in TBW, something like 1.4 PBW vs. 1,300 TBW. I still think it makes a difference, unless my conversion calculations are incorrect, and I would prefer to have the extra endurance. The Intels from the old Dell still show maybe 1% or 0% usage after a year in production, and even in the homelab, albeit I don't do daily backups there like the production unit does... I don't really know how that's possible, but I guess they are heavily overprovisioned and they are enterprise drives. I don't think they are worth what Dell charges for them, though; I think they cost more than double what I would pay directly from Intel.
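A quick unit check on that comparison (PBW to TBW):

Code:
echo "1.4 * 1000" | bc    # 1 PBW = 1,000 TBW, so 1.4 PBW is about 1,400 TBW vs. the ~1,300 TBW rating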
 
I have settled on going with Intel D3-S4610 drives. I am going to go with a pair of 240 GB SSDs mirrored in ZFS RAID 1 for the OS and another pair of 960 GB SSDs in ZFS RAID 1 for the VMs and storage.
Sounds like a very solid choice, congrats! If you're sure that about 900 GB of storage will be enough, then go with it - why not? :) And even if you need more in three or four years: backing up the VMs, replacing the SSDs with larger ones, setting up a new ZFS-based storage and finally restoring the VMs to it wouldn't be a big deal, I think.
What did you choose as server hardware?
 
Sounds like a very solid choice, congrats! If you're sure that about 900 GB of storage will be enough, then go with it - why not? :) And even if you need more in three or four years: backing up the VMs, replacing the SSDs with larger ones, setting up a new ZFS-based storage and finally restoring the VMs to it wouldn't be a big deal, I think.
What did you choose as server hardware?


Thank you, Sherminator.
Well, I know there is a concern about space, but we also have a Synology NAS there for backups, plus 2 external USB drives. However, I found that the Windows Server 2019 VM we are running on the Proxmox server doesn't actually use up a lot of storage space. I had initially put it on 1 TB drives, or 960 GB, whatever the actual size is, but we didn't even come close to using the whole space. So I shrank both virtual disks for the sake of quicker backups and the ability to transfer the whole image offsite in case of disaster recovery.
I have since allocated the VM's OS disk a size of 100 GB and the secondary drive which houses the data 200 GB. When I vzdump the whole thing, it comes out to about 66 GB right now; before, it was something like 150 GB. So I figure if I need more space in the future, I can always increase the size of the virtual disks before increasing the size of the physical disks.
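If it ever comes to that, growing a virtual disk later is a one-liner on the host (the VM ID and disk name below are just examples; the partition and filesystem inside the guest still have to be extended afterwards):

Code:
qm resize 100 scsi0 +100G    # grow that VM disk by another 100 GB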
Also, who knows how much drives will cost in the future and what capacities there will be. Unless I really need the space now, it's going to cost a premium.
The NAS is running 2 TB drives in a RAID mirror, but I recently got 6 TB 7200 RPM ones to replace them, and they cost ~$200 Canadian, which is much lower than SSDs, even compared to the 1.92 TB or 3.84 TB options.

As for the server, I have decided to go with the Supermicro AS-5019D-FTN4 model, which is a short-depth rack option at about 10 inches deep and 1U in height.
It uses an AMD EPYC 3251 processor with 8 cores and 16 threads. I read some reviews on ServeTheHome and looked at some of the benchmarks. It is an embedded unit, which I think means it is aimed at special use cases like being low powered and small enough to fit in tighter places, which is exactly what we are looking for.
I was talking to a friend who works at AMD, on the consumer GPU side, and he reminded me that it's all about what we want to use it for. The new EPYC Rome chips are nice, and the technology is great, but they are full-on servers for large racks, not for a small office where space is at a premium and the workload is not all that intense.

The competition would be the Intel Xeon D models, but I read that in a lot of the benchmark tests the EPYC 3251 scored quite well, coming close to or even exceeding the Xeon D-2141 models. The Xeon D-2100 series has also had some issues with being a little more power hungry.
Anyway, I'm going to give AMD a try this time. They have been innovating and doing really well with their Zen-based processors as well.
The only thing is that the AMD EPYC 3251 doesn't seem to be used as much as the Intels. One person noted using Proxmox with their unit but started getting errors; it was later determined to be a faulty memory slot, which was repaired.
 
Seems like a small but powerful piece of hardware with good single-thread performance - I don't know it, but I like it! :D We just built a three-node PVE/Ceph cluster based on AMD EPYC 7351 CPUs - no trouble so far. :)
Do you have any further questions?
Wow, thank you again so much.
I think I put a post up here a while ago. I haven't read too much about people with AMD setups. Also, on ServeTheHome a lot of people have been using the Xeon Ds like I mentioned; there's almost a cult following, and those systems get pushed mostly for homelabs and people using ESXi. One thing that is attractive about the Supermicro 5028D is that it comes in a cube-like case with a mini-ITX board but still allows for 6 drives, 4 of which are hot-pluggable in bays at the front, and you can also lock the chassis. I originally bought one of those, but the cube format made it less convenient to fit into a rack system: yes, it's narrower than 19", but the height made it take up more space.
I figure, just like with using Proxmox, as the decision makers we can influence things, even if it's only a little bit at a time. For example, choosing Proxmox over ESXi, Hyper-V, XenServer or something else supports something based on Debian Linux, which in turn has more people testing things out and giving feedback to one another. Using AMD also challenges Intel, which I hope it will do more and more as time goes by.

I have not set up a cluster. I know it's best practice, but I don't know how feasible it is in a small business environment. Even virtualization doesn't seem all that common in the smaller offices I have seen or among the people I have spoken with... Maybe eventually, or something with high availability.
 
And relating to the choice of HDDs/SSDs: when you go through this forum you will eventually find a consensus: when you do virtualization for business purposes, buy enterprise-grade hardware and have a good backup strategy - end of story. :)


I would like your input on drives for the OS. I was thinking of starting another thread, but I realized this one is still here.
I have a Dell T330 set up with Proxmox 6.1. To be fair, it had been a bare metal Windows Server 2016 Essentials server in a small business for about a year before I installed Proxmox on it. However, in the past two months, with what I think is very little use, the SSD life of the ZFS RAID 1 pair holding the Proxmox installation is now at 98%, down from 99% when it started. The second ZFS RAID 1 pair, which I have set up to host the VMs, is still at 100% life remaining.

Maybe there is a better setup?
Anyway, for my new setup I have a Supermicro 5019D-FTN4 server ordered, and I decided to create a similar dual setup: ZFS RAID 1 for the OS on 240 GB Intel D3-S4610 drives and another ZFS RAID 1 for storage with a pair of 960 GB drives of the same model.
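The plan for the storage pair is roughly this (pool/storage names and device IDs below are just placeholders; the OS pair would be handled by the PVE installer):

Code:
zpool create -o ashift=12 vmdata mirror /dev/disk/by-id/<ssd1> /dev/disk/by-id/<ssd2>
pvesm add zfspool vmdata --pool vmdata --content images,rootdir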
What I am contemplating is whether or not these will be durable enough.

According to Intel's website, the Intel D3-S4610 240 GB drives have an endurance rating of 1.4 PBW.
The Intel DC S4500 shows an endurance of 0.62 PBW; it is a Dell-certified drive, though I don't know if that last part makes any difference.

Anybody can chime in here. Either I get a larger drive in the future that has better endurance, or I just run it and monitor the life.

Again, I have an ADATA XPG SX8200 Pro 1 TB NVMe in a NUC right now which is rated for 640 TBW and shows 2% used after approximately 2, almost 3, months... I checked the SMART stats and it's at about 13.4 TB written so far. The Crucial MX500 it is paired with only has around 300 TBW endurance, though, which is probably why its life-remaining percentage is lower.
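A rough cross-check of those numbers:

Code:
echo "640 * 0.02" | bc    # 2% of the rated 640 TBW is ~12.8 TB, in line with the ~13.4 TB written that SMART reports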


I see that the Corsair MP510 seems to be the new kid on the block, with higher endurance ratings and reasonably priced... It has a 5-year rating with 1,700 TBW on a 960 GB drive. I might have some numbers crossed, but I think that's pretty close. If the Corsair numbers are correct, they look pretty good... The Intel D3-S4610 960 GB has a better endurance rating, but it's about double the price.
 