Recommended SSDs/M.2s for a non critical mini server system

mariant

New Member
Dec 27, 2022
Hello guys!

After I discovered the option to migrate my HA from an RPi to a VM via Proxmox, I was very happy and impatient to do it. So I bought a recommended mini PC (Dell OptiPlex 3050: i7-6700T with 8GB RAM & 256GB SSD) and I'm now looking to improve this hardware configuration in order to virtualise the following apps/services:

- Home Assistant
- OpenVPN
- Plex
- pfSense
- Torrent server

The backup disk for photos, videos, downloaded files, etc. will be either an SSD connected via USB or a Synology NAS.


For the RAM it is simple: I can buy 16GB or 32GB. My concern is about the SSD and M.2 storage. My questions are:

1. Is it recommended to use the M.2 drive for the root drive (the "local" storage) and the 2.5” drive for local-lvm (images, containers, VMs), or is it enough to use only one disk?
2. Considering the huge number of writes to SSD/M.2 disks caused by virtualisation and the ZFS file system, is it important to consider the TBW (Terabytes Written) rating, or does it not matter for a simple home-lab configuration like the one described above? Do you recommend buying a second-hand enterprise SSD instead of a new consumer SSD?
3. Is it possible to not use ZFS, or is the Proxmox root drive based on this filesystem?
4. Since it is not a critical system, is it fine to not worry about the SSD/M.2 disk and just rely on Proxmox/VM/LXC backups in case of a failure?

Please share with me your experience/knowledge…
Thank you !
 
1. Is it recommended to use the M.2 drive for the root drive (the "local" storage) and the 2.5” drive for local-lvm (images, containers, VMs), or is it enough to use only one disk?
Same disk for PVE system + VM/LXC storage will be fine.

2. Considering the huge number of writes to SSD/M.2 disks caused by virtualisation and the ZFS file system, is it important to consider the TBW (Terabytes Written) rating, or does it not matter for a simple home-lab configuration like the one described above? Do you recommend buying a second-hand enterprise SSD instead of a new consumer SSD?
Really depends on the workload. But when working with ZFS it is highly recommended to buy a proper enterprise/datacenter SSD with power-loss protection. Consumer grade might be OK for a while if you don't write a lot to databases, but be aware that you will have to replace your SSDs more often (and set up PVE again and again, and lose data, if you don't run everything in a RAID with parity/mirroring). If you don't look at the price per TB of capacity but at the price per TB of TBW, those enterprise SSDs are actually cheaper than the consumer stuff: https://forum.proxmox.com/threads/c...for-vms-nvme-sata-ssd-hdd.118254/#post-512801

3. Is it possible to not use ZFS, or is the Proxmox root drive based on this filesystem?
You can choose between ZFS (for single disk and raid) and LVM+LVM-Thin (only for single disk, unless you use it on top of HW raid).

4. Since it is not a critical system, is it fine to not worry about the SSD/M.2 disk and just rely on Proxmox/VM/LXC backups in case of a failure?
You should always have recent backups, even when using snapshots and having RAID with redundancy. It depends on how much money you have (buying a new cheap consumer SSD every year might be more expensive than just paying double or triple the price for a durable SSD that might then survive 5 or more years) and whether you care that your smart home isn't that smart anymore because HA might be offline for some days, until you have bought new disks and found some time to install everything again.

You could test it first with the 256GB SSD you already have. Monitor your SMART stats, and when it fails or you run out of space, have a look at the disk wear you have accumulated so far. You can then decide what you want to spend on a new/bigger SSD.
 
Thank you @Dunuin for your prompt and useful answer. I will consider, as you mentioned, the option of testing the workload of my home-lab configuration and, based on that, decide later how to continue. Since I haven't ordered the consumer SSD (M.2 or 2.5”) yet, and maybe I was not so clear in my previous post, please let me know:

1. Which interface is better for connecting the SSD: M.2 on PCIe x4, or SATA 3?
2. Which brand/model should I consider? I was looking at the WD Red as a compromise solution regarding durability.

It would also be useful if you could recommend a SMART monitoring tool and explain how to use it for prevention (keeping an eye on the TBW…).

Thank you in advance!
 

Some years ago I abused a 500GB Samsung 850 EVO consumer SSD as a caching drive in a small server, also using it for temporary storage when writing LXC backups.

It's still going after 3 years of power-on hours (the drive is older than that, it just spent a while not doing much), and it also spent some time as a games drive in my desktop. Total LBAs written is 121,451,325,779.
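For context, assuming the usual 512-byte logical sectors, that raw LBA counter works out to roughly:

$$121{,}451{,}325{,}779 \times 512\ \text{B} \approx 62.2\ \text{TB written}$$

i.e. about 124 full drive writes of a 500GB drive.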

Yes, enterprise drives have more longevity and I'd definitely recommend them for production use; for home labbing, just buy two consumer ones and use mirroring.
 
1. Which interface is better for connecting the SSD: M.2 on PCIe x4, or SATA 3?
M.2 is usually better, but also really expensive if you want an enterprise-grade SSD. I would prefer an enterprise SATA SSD over a consumer M.2 SSD.
2. Which brand/model should I consider? I was looking at the WD Red as a compromise solution regarding durability.
WD Reds are just expensive consumer SSDs. Either get a cheap TLC consumer SSD and replace it more often, or get a real enterprise/datacenter SSD with power-loss protection. Intel, Samsung, Kioxia and Micron have a lot of enterprise SSDs to choose from. If you want WD, you would have to look for an "Ultrastar DC" and not for a "Red".
It would also be useful if you could recommend a SMART monitoring tool and explain how to use it for prevention (keeping an eye on the TBW…).

You can use smartctl -a /dev/yourDevice to see the SMART attributes. Just write them down with a date. Do that multiple times and you can see how much you have written over that timespan and extrapolate.
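If you want to automate that bookkeeping, here is a minimal sketch. It assumes Python and smartmontools on the PVE host, a SATA SSD that reports SMART attribute 241 (Total_LBAs_Written) with 512-byte sectors, and a made-up log path; NVMe drives report "Data Units Written" instead, so the parsing would need adjusting.

Code:
#!/usr/bin/env python3
# Sketch: append a timestamped "bytes written so far" snapshot to a CSV.
# Run it periodically (e.g. daily via cron); two snapshots give you the
# host write rate, which you can divide into the drive's TBW rating.
import csv, datetime, subprocess, sys

DEVICE = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
LOGFILE = "/root/ssd_writes.csv"   # hypothetical path, change as needed
SECTOR_BYTES = 512                 # check `smartctl -i` for your disk

def total_bytes_written(device: str) -> int:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:            # SMART attribute 241
            return int(line.split()[-1]) * SECTOR_BYTES
    raise RuntimeError("Total_LBAs_Written not found; adapt for your drive")

with open(LOGFILE, "a", newline="") as f:
    csv.writer(f).writerow([datetime.datetime.now().isoformat(),
                            total_bytes_written(DEVICE)])

The difference between two rows, divided by the number of days between them, is your real write rate per day.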

Yes, enterprise drives have more longevity and I'd definitely recommend them for production use; for home labbing, just buy two consumer ones and use mirroring.
Really depends on your workload and storage setup. My home server is writing 900GB per day while idling because of insane write amplification, and 3 consumer SSDs have already failed this year because of ZFS.
 
Thank you guys! After all your useful advice I bought a Samsung 970 EVO Plus, 500GB, M.2 (for 72€). Considering that it is an NVMe drive connected via M.2 PCIe, should I activate the PCI passthrough option for better performance?
From the Proxmox documentation I extracted this:

”PCI(e) passthrough is a mechanism to give a virtual machine control over a PCI device from the host. This can have some advantages over using virtualized hardware, for example lower latency, higher performance, or more features (e.g., offloading).
But, if you pass through a device to a virtual machine, you cannot use that device anymore on the host or in any other VM.”


So, considering that I'll create more VMs/LXCs, this feature will not be applicable to my configuration. Please confirm…

Thank you again
 
Unless you want to buy a dedicated NVMe drive for each VM, that doesn't make much sense.
 
My home server is writing 900GB per day while idling because of insane write amplification, and 3 consumer SSDs have already failed this year because of ZFS.
Where can I monitor that in Proxmox? Is that possible?
 
You can check with smartctl -a /dev/yourDisk what your disk is reporting. There is usually a SMART attribute that logs how much data got written to the disk, and sometimes even how much data got written to the NAND cells. Do this two times, calculate the difference, and you know how much data was written in that timespan.
 
Really depends on your workload and storage setup. My home server is writing 900GB per day while idling because of insane write amplification, and 3 consumer SSDs have already failed this year because of ZFS.
900GB per day?? Wow. I saw a YouTuber who said his setup was doing 30GB a day and I thought that was a lot.

I'm planning my setup and want to go in the other direction - any other tips besides log2ram ?
 
It's actually only writing 45GB of real data per day, but because of the factor-20 write amplification that causes 900GB of writes to the SSDs' NAND.
But I replaced my consumer SSDs (only 600 TB TBW, so ZFS was shredding them in no time...) with some good enterprise SSDs with 18,000 TB TBW, so these SSDs should still last 55 years (or at least until the controller chip dies ;) ).
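Just to show where the 55 years come from, using the numbers above:

$$\frac{18{,}000\ \text{TB TBW}}{0.9\ \text{TB NAND writes per day}} = 20{,}000\ \text{days} \approx 55\ \text{years}$$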
 
It's actually only writing 45GB of real data per day, but because of the factor-20 write amplification that causes 900GB of writes to the SSDs' NAND.
But I replaced my consumer SSDs (only 600 TB TBW, so ZFS was shredding them in no time...) with some good enterprise SSDs with 18,000 TB TBW, so these SSDs should still last 55 years (or at least until the controller chip dies ;) ).

18,000 TB TBW? Can you please let me know the model of this SSD? This amount of TBW is insane; are you sure it is 18,000 TB TBW and not maybe 1,800 TB?
 
https://www.intel.com/content/dam/w.../product-specifications/ssd-dc-s3710-spec.pdf
https://www.intel.com/content/dam/w.../product-specifications/ssd-dc-s3700-spec.pdf:
A 1.2 TB disk with 24.3 PB TBW = 20,250 TB TBW per TB of capacity, because it is using good eMLC NAND chips and a big spare area.

Intel Optanes with SLC NAND chips have even more TBW:
https://gzhls.at/blob/ldb/3/0/0/8/95b8420ec6d9a1b26fd31b865cddbdbd12ed.pdf
1.5 TB disk with 164 PB TBW = 109,333 TB TBW per TB capacity.

Those consumer TLC and QLC SSDs are really crappy compared to this:
Samsung SSD 870 EVO 1TB: 600 TB TBW: https://gzhls.at/blob/ldb/5/b/e/4/6dd6661bcf9d7de150b37b7ceaa58f7170d3.pdf
Samsung SSD 870 QVO 1TB: 360 TB TBW: https://gzhls.at/blob/ldb/7/7/8/3/b647db0a9a9ffa715f0e11cf88c38bbf7021.pdf
 

Wow! That is insane! Thank you so much for sharing. I thought no one produced MLC SSDs anymore due to the high cost.

So 30 DWPD basically means that you can rewrite the whole capacity of the SSD 30 times per day, every day, for 5 years? Wow. Not even asking how much these drives cost.

The highest-rated consumer "data center" grade M.2 and U.2 SSDs I've seen had 1095 TB TBW (Kingston DC1000):

https://www.kingston.com/en/ssd/dc1000b-data-center-boot-ssd
 
Wow! That is insane! Thank you so much for sharing. I thought no one produced MLC SSDs anymore due to the high cost.
Jup, you still get Optanes with SLC NAND but those eMLC NAND SSDs aren't produced anymore.

So 30 DWPD basically means that you can rewrite the whole capacity of the SSD 30 times per day, every day, for 5 years?
Jup, and some of those Optanes are even rated for 60 DWPD.
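For anyone comparing datasheets, the two endurance ratings are related through the vendor's warranty period (usually 5 years):

$$\text{TBW} \approx \text{DWPD} \times \text{capacity} \times 365 \times \text{warranty years}$$

Plugging in the Optane numbers from above: $164{,}000\ \text{TB} \div (1.5\ \text{TB} \times 365 \times 5) \approx 60$ DWPD.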

Not even asking how much these drives cost.
Too much. Such a P5800X 1.6TB is 3131€. But it's still way cheaper than buying cheap consumer SSDs if you care about the "price per TB TBW" and not the "price per TB capacity", as you don't need to replace the SSDs as often. I made a table showing this: https://forum.proxmox.com/threads/c...-for-vms-nvme-sata-ssd-hdd.118254/post-512801

But if you buy these SATA enterprise SSDs second hand, you often get them quite cheap, as everyone wants NVMe SSDs these days. I got my S3710/S3700 for about 125€ per TB capacity.
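As a rough illustration of that price-per-TB-TBW point, using the figures from this thread plus a hypothetical ~60€ street price for a Samsung 870 EVO 1TB:

$$\text{870 EVO 1TB:}\ \frac{60\ €}{600\ \text{TB TBW}} = 0.100\ €/\text{TB TBW}$$
$$\text{P5800X 1.6TB:}\ \frac{3131\ €}{164{,}000\ \text{TB TBW}} \approx 0.019\ €/\text{TB TBW}$$
$$\text{Used S3700 1.2TB at }125\ €/\text{TB }(\approx 150\ €)\text{:}\ \frac{150\ €}{24{,}300\ \text{TB TBW}} \approx 0.006\ €/\text{TB TBW}$$

So per terabyte of rated endurance, the enterprise drives end up cheaper by an order of magnitude or more.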
 

Very good points in the linked post! Do you also consider power-loss protection (PLP) an important feature of server SSDs? For example, these consumer "data center" grade U.2 SSDs have it:
https://www.kingston.com/en/ssd/dc1500m-data-center-ssd
https://semiconductor.samsung.com/u...40-zb-and-beyond-samsungs-new-pm9a3-is-ready/

But what exactly does power-loss protection (PLP) do in a real-world power-loss situation? Will it really save the data in transit, or does it just protect against drive failure during a power loss? Do you even need this feature if you use a UPS?
 
PLP allows the SSD to run for a few seconds without power from the PSU, so it can quickly dump all data from the volatile DRAM cache into the non-volatile SLC cache. This allows the SSD to cache sync writes in the DRAM cache, which greatly increases sync write performance and greatly decreases write amplification for sync writes: the sync writes can be collected in DRAM, which does not wear over time like the SLC cache would, so the writes to the NAND can be optimized.

Let's say the SSD internally uses 8K sectors but can only erase blocks of 64K. When doing 1600x 4K sync writes to an SSD without PLP, it can't cache them in DRAM, so it really needs to write those 1600x 4K blocks as 1600 single operations, one after another, directly to the NAND. This will wear the NAND by 1600 x 64K = 102.4MB, as for each 4K block of data, 64K of NAND has to be read, erased and rewritten.
An SSD with PLP, on the other hand, can cache those 1600x 4K blocks in the DRAM cache and then write them as 200x 64K blocks from DRAM to NAND. This will only wear the NAND by 200 x 64K = 12.8MB. So the write amplification of an SSD without PLP is way higher, because this optimization step is missing. Also, a sync write operation can be acknowledged as soon as the data is written to the volatile DRAM cache, so all 1600 4K sync writes will be acknowledged nearly instantaneously, while the SSD without PLP can only start the next sync write after the previous one has been written to the NAND.
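Restating that example as a quick calculation (same assumed 8K internal sectors, 64K erase blocks and 1600 sync writes of 4K each):

$$\text{host data: } 1600 \times 4\,\text{K} = 6.4\ \text{MB}$$
$$\text{NAND wear without PLP: } 1600 \times 64\,\text{K} = 102.4\ \text{MB} \;\Rightarrow\; \text{write amplification} \approx 16$$
$$\text{NAND wear with PLP: } 200 \times 64\,\text{K} = 12.8\ \text{MB} \;\Rightarrow\; \text{write amplification} \approx 2$$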

Have a look at the Proxmox ZFS Benchmark paper: https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020
[Screenshot from the benchmark paper: 4K sync write results, IO/s per drive]
The benchmark was doing 4K sync writes. Have a look at the IO/s column: all the SSDs with PLP get great IOPS performance, while the two consumer SSDs without PLP are nearly as slow as the HDD at the bottom.

So buying an SSD without PLP is like using a RAID card without a DRAM cache and BBU.
 

Thank you so much for your time explaining all this.

I definitely need to read and learn a lot more about zfs storage principles and recommended hardware.
 

After a bit of research and checking local prices, I decided to go with the not-very-expensive Micron 7450 MAX 800GB NVMe M.2 SSD. It checks all the boxes:
a) Power-loss protection (Quote: "Ultracompact M.2 form factors well-suited for boot devices. A PCIe Gen4 M.2 22x80mm SSD with Power Loss Protection – specifically designed for server boot use.")
b) 3 Drive Writes per Day, 5 years warranty
c) Affordable - Micron 7450 MAX 800GB NVMe M.2 SSD costs 155 Euros locally.

My motherboard has 2 M.2 NVMe slots and my plan is to buy two of these to run as a ZFS mirror. The only doubts I have, due to my lack of experience with Proxmox, are:
1) Is 800GB enough for the Proxmox OS and ISO files? (VM data will be stored separately on high-endurance U.2 SSDs.)
2) Are 5,000 MB/s read and 1,400 MB/s write speeds enough for the Proxmox OS drive? Does the Proxmox OS even benefit from a very fast SSD, or is a regular SATA SSD at 550 MB/s more than enough?

Context: I will use this Proxmox server for home use and will have about 8-10 VMs.

Again, thank you so much for your time.
 
c) Affordable - Micron 7450 MAX 800GB NVMe M.2 SSD costs 155 Euros locally.
I bought the 7450 PRO 960GB for the same price. I'm very happy with the PLP and the much higher syncs/sec than I ever expected.
My motherboard has 2 M.2 NVMe slots and my plan is to buy two of these to run as a ZFS mirror. The only doubts I have, due to my lack of experience with Proxmox, are:
1) Is 800GB enough for the Proxmox OS and ISO files? (VM data will be stored separately on high-endurance U.2 SSDs.)
2) Are 5,000 MB/s read and 1,400 MB/s write speeds enough for the Proxmox OS drive? Does the Proxmox OS even benefit from a very fast SSD, or is a regular SATA SSD at 550 MB/s more than enough?
Proxmox does not need much (I use 10GB without ISO storage). It's just reading executable files during startup and writing logs frequently. You can easily run it from (very small) spinning HDDs (but not from USB flash memory sticks, because of the many writes).
Most important for VMs is IOPS (and syncs/sec), because you have multiple virtual systems running on the same disks (instead of each system having its own physical disks).
 
