Tips on Upgrading SSDs for my Proxmox lab

westrangers

New Member
Jan 2, 2025
Hi! I just started repurposing my old gaming PC as a Proxmox home lab and so far I am having lots of fun. My next step is to upgrade storage, since I am running a single 512GB Samsung 870 EVO SSD and hitting its limits.

Since my motherboard has a single PCIe 3.0 x4 M.2 slot, I am thinking of getting one 2TB Samsung 970 EVO Plus NVMe SSD and one 1TB Samsung 870 EVO SATA SSD. I would use the 870 EVO as the main storage and the 970 for the main backup (or vice versa). Does this plan make sense or am I overlooking something?

The only thing is that I often saw the WD Black SN770 2TB recommended as a good alternative, and it's 20-30% cheaper than the Samsung. However, my motherboard does not support PCIe 4.0, so I am not sure if it's a wise choice.
 
The good news is that in most situations PCIe 4.0 NVMe drives will simply operate at 3.0 speeds in a 3.0 slot.

A lot of homelab folks here use off-lease, off-warranty SATA disks from eBay, like the Intel S4610, as they have an extremely high endurance (Terabytes Written, TBW) rating. There are a ton of different models to look at - check the datasheets for their specs.

It always helps to do the research before getting used equipment - only very rarely is an entire product line affected by something like a firmware issue. As for your new choices, I'm actually using the SN850X right now as an external drive in an NVMe enclosure. It's hard to go wrong with new, in-warranty disks from a trusted manufacturer.
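When vetting a used drive, the SMART counters tell you how much of the rated endurance has already been spent. A rough sketch of the arithmetic (the attribute value and TBW rating below are made-up placeholders, not real S4610 figures - substitute what `smartctl -A /dev/sdX` reports and what the datasheet says):

```python
# Rough endurance check for a used SSD from SMART data.
# All numbers below are hypothetical examples.

LBA_SIZE = 512  # bytes; Total_LBAs_Written counts 512-byte units on many drives

total_lbas_written = 1_234_567_890_123  # hypothetical raw SMART value
rated_tbw = 10_000                      # hypothetical datasheet TBW rating, in TB

tb_written = total_lbas_written * LBA_SIZE / 1e12
pct_used = 100 * tb_written / rated_tbw

print(f"{tb_written:.1f} TB written, {pct_used:.1f}% of rated endurance used")
```

A drive that has burned through most of its rated TBW is a much worse deal than its price suggests, so this check is worth the minute it takes.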
 
Does this plan make sense or am I overlooking something?
Technically it will work, but running multiple VMs/OSs on consumer grade SSDs will eat up the endurance fast, and they often don't deliver the needed IOPS.
The price of enterprise class SSDs is obviously off-putting, and you might think they are overkill for a home lab. That's not the case.
With normal consumer class hard drives you can get by under server loads - they are just slower, and it doesn't necessarily affect durability. With SSDs it is fundamentally different, because endurance becomes a factor.

Two choices: buy cheap now, notice pretty quickly that the throughput is unsatisfactory, and within a few months to a year find the SSDs badly worn down; or listen to my advice and that of many others who already know this, pay the higher price for SSDs designed for your purpose, and be completely satisfied.
 
A lot of homelab folks here use off-lease off-warranty SATA disks on ebay like the Intel S4610 as they have an extremely high endurance (Terabytes Written, TBW) rating.
Cheers, I will look into possible options. I see someone selling a brand new Intel D3-S4610 1.92TB for $180. Seems too good to be true, I suppose?

Technically it will work, but running multiple VMs/OSs on consumer grade SSDs will eat up the endurance fast and often doesn't deliver the needed IOPS.
So the Samsung 970 EVO Plus 2TB is rated for 1200 TBW. Isn't that plenty for a home lab? I also just found a new Intel D3-S4610 1.92TB for $180, but it would be slower, I presume. So if both of them would easily last 5 years, is it really worth getting the enterprise grade SSD (the D3-S4610)?
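1200 TBW sounds like a lot, but write amplification changes the picture. A back-of-envelope sketch (the workload and amplification numbers are hypothetical - measure your own host writes; sync-heavy ZFS workloads can multiply them several times over):

```python
# Back-of-envelope SSD lifetime from a TBW rating.
# Workload numbers are hypothetical placeholders.

rated_tbw = 1200            # TB; 970 EVO Plus 2TB datasheet rating
host_writes_gb_day = 100    # hypothetical: VMs + Proxmox/cluster logging
write_amplification = 10    # hypothetical factor for sync-heavy ZFS workloads

nand_writes_tb_day = host_writes_gb_day * write_amplification / 1000
years_to_rated_tbw = rated_tbw / nand_writes_tb_day / 365

print(f"{years_to_rated_tbw:.1f} years to reach rated endurance")
```

With these assumed numbers the drive reaches its rating in roughly three years; with a lighter workload or lower amplification it could last far longer, which is why measuring your own write load matters.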
 
There are some M.2 NVMe drives with PLP (power-loss protection) from Micron and Samsung (but some are 110 mm long, i.e. 22110 - check that your slot fits them). Those are great because they can cache sync writes, which matters a lot for ZFS speed and write amplification. After a few years using a 970 EVO Plus, I went for a Micron 5450 and fsync/sec skyrocketed - I never have to worry about wearing the drive down (with my desktop workload).
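The fsync/sec difference is easy to measure yourself. A minimal sketch in the spirit of the FSYNCS/SECOND number that `pveperf` reports (point `path` at a directory on the storage you want to test; results are rough):

```python
# Rough fsync/sec measurement: repeatedly write a small block and force
# it to stable storage. Drives with PLP can acknowledge sync writes from
# their protected cache, so they score far higher here.
import os
import tempfile
import time

def fsyncs_per_second(path=None, duration=1.0):
    fd, name = tempfile.mkstemp(dir=path)
    try:
        count = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            os.write(fd, b"x" * 4096)  # small write, like a journal/ZIL entry
            os.fsync(fd)               # wait until it is on stable storage
            count += 1
        return count / duration
    finally:
        os.close(fd)
        os.unlink(name)

print(f"{fsyncs_per_second():.0f} fsync/s")
```

Run it once on a consumer drive and once on a PLP drive mounted somewhere and the gap the poster describes becomes very visible.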
 
There are some M.2 NVMe drives with PLP from Micron and Samsung.
Good point! I just found a deal on two new unsealed Intel D3-S4610 1.92TB drives for $325 total. The seller seems trusted, so I hope it's not some sort of scam, because it looks like a good deal.
 
running multiple VMs/OSs on consumer grade SSDs will eat up the endurance fast
If you are going to make this statement, please make it plain that it only applies if you are using ZFS. Consumer SSDs are fine and fast enough for multiple VMs in a small setup if you use ext4.
 
Consumer SSDs are fine and fast enough for multiple VMs in a small setup if you use ext4.
In the broader context your statement is largely true - but it can still bite you later.

Firstly, you have to consider the (heavy) Proxmox/cluster logging. Secondly, if your use case includes heavy writing to disk you may also end up "surprised". I find the most economical approach with non-enterprise drives is to use only ~70% of the drive's capacity - then the drive will last.
 
Intel D3-S4610 1.92TB for $180. Seems too good to be true, I suppose?
I don't have a price feeling for $, only for € ;)

So if both of them would easily last 5 years, is it really worth getting enterprise grade SSD (INTEL D3 S4610)?
If they last. You could buy a mix: dump your backups (bursty writes) to a cheaper one and see how it goes. VMs, on the other hand, produce continuous write operations, and for those I would always choose enterprise class. It is difficult to estimate in advance how much write load you will have - with enterprise drives I don't have to, and if I'm lucky I can reuse the expensive SSD in my next server after 5 years.

Consumer SSDs are fine and fast enough for multiple VMs in a small setup if you use ext4.
Sure, but at some point you could regret it. Maybe he wants to use ZFS or Ceph in a few weeks... IMHO it restricts you too much, and the list of IFs is just too long.
 
I don't have a price feeling for $, only for € ;)
It's indeed in Europe, so €175 for the unsealed Intel D3-S4610 1.92TB - I actually just ordered two. If I got scammed, then the seller has the most convincing, legit-looking profile ever (12 years active, 100+ five-star ratings selling hard drives and similar gear).
 
I just checked prices and yeah... €175 even in used condition is a really good price.

First thing to do: check the firmware and upgrade if needed:
https://www.thomas-krenn.com/de/wiki/Intel_D3-S4510_SSDs_und_D3-S4610_SSDs_Firmware_Update_XCV10110
https://www.thomas-krenn.com/de/wiki/Intel_D3-S4610_Series_SSDs
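To see which firmware a drive currently reports before flashing, `smartctl -i` prints a "Firmware Version" line. A small parsing sketch (the sample output is a trimmed, made-up example - in practice feed in the real stdout of `smartctl -i /dev/sdX`):

```python
# Pull the firmware version out of smartctl -i style output.
# sample_output is a shortened, hypothetical example.

sample_output = """\
Device Model:     INTEL SSDSC2KG019T8
Firmware Version: XCV10100
User Capacity:    1,920,383,410,176 bytes [1.92 TB]
"""

firmware = next(
    line.split(":", 1)[1].strip()
    for line in sample_output.splitlines()
    if line.startswith("Firmware Version")
)
print(firmware)
```

Comparing that string against the versions listed on the Thomas-Krenn pages above tells you whether the drive is on the affected firmware.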
Thanks for the tips! One of them indeed needs to be updated.

So I just received both D3-S4610 1.92TB drives, one unsealed in a box and one sealed. I ran smartctl on the unsealed one and it showed the latest firmware, Power_On_Hours = 0 and Power_Cycle_Count = 4. Didn't test the unsealed one, but that one shows XCV10100 on the label, which is the initial affected version. It's not strange to have a new/unsealed SSD with such an 'old' firmware?

So now that I have two new SSDs, I am thinking about how to make the best use of them. I suppose I should still get a smaller SSD for Proxmox itself, and keep these for all the VMs and backups?
 
It's not strange to have a new/unsealed SSD with such an 'old' firmware?
Nope, usually just untouched spare parts on the shelf.

I suppose I should still get a smaller SSD for Proxmox itself, and keep these for all the VMs and backups?
I split everything as well as I can: Proxmox itself boots from a triple mirror of cheap SATA SSDs (because they're cheap, and you never know - even 3 SATA ports can go poof), backups go on "medium speed/quality" SSDs or big HDDs, and the VMs on the good NVMes. It depends on how many free slots/ports you have available to fill... and as always... it's never enough :)
 
I split everything as good as I can.
Thanks for the tips! Right now I have a Samsung 870 EVO 512GB where both Proxmox and the VMs are running, and I just got these 2 new SSDs, so I'm thinking how best to make this work in an efficient and smart way. I'm not running critical infrastructure here, it's just a home lab which I want to grow, and it would obviously suck if an SSD randomly died and I had no backups/mirrors.

Some typos (I'm guessing) with sealed vs. unsealed - please edit/clarify for my OCD!
No typos here :D. But I may not have been very clear. I received 2 SSDs - one sealed in official plastic wrapping and one in a plastic box which I could just open without any seal. I tested the second one (the one in the plastic box, which I call 'unsealed'), since if this one is legit and indeed brand new, then chances are pretty high that the actually sealed one is new as well.
 
it's just a home lab which I want to grow, and it would obviously suck if an SSD randomly died and I had no backups/mirrors.
Suggestion:
870 EVO 512GB <- boot Proxmox from it and put your .isos on the remaining storage space (if this disk dies, no important data is lost)
2x D3-S4610 <- make a ZFS mirror with them (even more read speed) and put your VMs on it (one disk can die without problems)

Put backups on some external space, maybe big USB3-HDD.
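The suggested mirror could be created roughly like this - the device paths and the pool name `vmpool` are placeholders (use the stable /dev/disk/by-id names rather than /dev/sdX, so the pool survives device reordering):

```shell
# List your stable device names first:
ls -l /dev/disk/by-id/

# Mirror the two S4610s for VM storage (paths below are placeholders):
zpool create -o ashift=12 vmpool mirror \
    /dev/disk/by-id/ata-INTEL_SSDSC2KG019T8_SERIAL1 \
    /dev/disk/by-id/ata-INTEL_SSDSC2KG019T8_SERIAL2

# Verify that both halves of the mirror are ONLINE:
zpool status vmpool
```

`ashift=12` pins the pool to 4K sectors, which is the usual safe choice for SSDs.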
 
No typos here
I hate to argue - but carefully reread your OP.

Didn't test unsealed one, but that one shows XCV10100 on the label
So the "unsealed" one has XCV10100 on the label? You didn't test it, but the "unsealed" one has the latest FW which you tested with smartctl? So has the "unsealed" one been tested or not?

I'll be honest - if you have not made a mistake/typo - I NEED URGENT MEDICAL HELP!
 
2x D3-S4610 <- make ZFS mirror (even more read speed) with them, put your VMs onto it.
Would you suggest then to put both Proxmox and all the VMs on the ZFS mirror or only the VMs? I suppose having Proxmox also on the same SSD wouldn't hurt and would prevent me from having to get yet another SSD for Proxmox mirror.
So has the "unsealed" one been tested or not?
Sorry, I am being dumb here. Yes, there was a typo indeed. So my last comment still stands correct in terms of what I tried. I also just unpacked the sealed SSD and ran a SMART test: it shows 1.92TB, Power_On_Hours=0, Power_Cycle_Count=2, and the original firmware (XCV10100). The other SSD, which was in the unsealed plastic box, had the latest FW - XCV10132. The labels on both indeed show the exact FW version that SMART reports. I will try to update the other one to the latest as well.
 
Would you suggest then to put both Proxmox and all the VMs on the ZFS mirror or only the VMs?
That would work, but it is not the best solution. Two things to consider:
Proxmox itself has a certain IO load (more when updates are running). It is very low compared to the load of the VMs, but it is there (which is why I literally burn up cheap SSDs for this - they are good enough for it). If only the VMs are on a mirror, they have the full performance to themselves. That is the weaker argument, but it explains the principle. The stronger argument is that you might want a second cheap/medium 512G. Then you can make two mirrors: 2x512G <- Proxmox, 2x2T <- VMs.
This way you have it cleanly separated, and if something breaks at some point during an update or through an operating error in Proxmox, you don't have to restore everything at once or laboriously take it apart. You just wipe the 2x512G mirror, reinstall Proxmox and import the 2x2T pool again. That distributes the IO load (and the endurance wear) sensibly, a disk can fail in either mirror, both pools stay cleanly separated, and if something really bad goes wrong you only have to restore one ZFS pool from your backup (time saved).
So that would be my tip: get another 512G (always check the firmware there too), then install Proxmox fresh and set it up accordingly.
Of course, you can also use each disk individually as storage. This gives you more available storage overall, but then no redundancy. It's never enough... :)
 
