Suggestions for SSD Setup / Configuration

norsemangrey

Member
Feb 8, 2021
I am setting up a new Proxmox server that will function as a file-/media-server, running mostly LXC containers and maybe 1-2 VMs.

I have 2 x Samsung 250GB 870 EVO SATA SSDs and 2 x Samsung 512GB 970 PRO NVMe M.2 SSDs to use for the server OS/system/VMs. I could really use some input on how to utilize these disks in the optimal way: how to arrange and configure them, and what to put where, for the best system performance and reliability.

For instance, should I configure both sets of disks as redundant ZFS vdevs using mirrored disks, using the regular SSDs for the Proxmox install and the M.2 disks for the VMs/LXCs? Maybe using part of each vdev as a backup location for snapshots of the other. This is just an example. I am just in the beginning phases of learning Proxmox and I might not even have the full picture of what should go on the disks for the "system" part of the server. I have a pool of regular HDD drives for the actual file/media storage.

Any input, comments and suggestions are welcome :)
 
So I am reading in some places that using ZFS on Samsung SSDs will wear them out quickly. Is that correct? And does it apply to both the NVMe and the regular SATA SSDs? If so, how should that affect my setup?
 
Even when Proxmox is idling, it'll write approx. 30GB/day.
Hence data-center-grade SSDs are strongly recommended.

If you use Proxmox for a homelab, then the SSDs are probably OK, if you closely monitor the SSD wear.

As a setup, I'd use a ZFS mirror.
But if you're just trying it out, then you don't need this.
A backup is much more important.
And if you don't have lots of RAM, then using ext4 might work better.
 
Thanks for the reply.

I wanted primarily to use ZFS so that it would be easy to take snapshots and backups of my Proxmox install and the VMs/LXCs. (The storage pool will also use ZFS, but those are industrial HDDs so that should not be an issue.)

Even when Proxmox is idling, it'll write approx. 30GB/day.
Hence data-center-grade SSDs are strongly recommended.

So when you say 30GB/day, is that just from the Proxmox install itself?

Unfortunately I do not think I can afford data-center / industrial-grade SSDs / NVMes for the Proxmox install drives nor the VM/LXC drives. I used much of my budget on industrial-grade HDDs for the actual storage pool.

If you use Proxmox for a homelab, then the SSDs are probably OK, if you closely monitor the SSD wear.

It is for a home lab and should not see too heavy usage. It will only be accessed by 2-3 users for file and media access / streaming, and probably not every day. How do I best monitor the SSD wear? Is it the same for the SATA SSDs and the NVMe SSDs?

As a setup, I'd use a ZFS mirror.
But if you're just trying it out, then you don't need this.
A backup is much more important.
And if you don't have lots of RAM, then using ext4 might work better.

The server is intended to be permanent and on 24/7, so it is not just for testing. Would you still recommend a ZFS mirror even though it could wear out the disks faster than just using a single-disk setup for both install and VMs/LXCs using ext4 (or could I set up a mirror with ext4 as well)?

With regards to backup, one of the points of using ZFS was the snapshot capability, which makes it easier to take incremental snapshots.

I currently have 32GB ECC RAM installed, but plan to expand to 64GB later.
 
You can monitor the disks with smartmontools. It's not that Samsung disks wear out faster than any other disks, but the 9xx are consumer disks, so the write amplification will wear them out.
30GB/day is approximately what Proxmox writes in logs per day. This can be reduced a bit, but the way the data is written (small chunks every few seconds) will not do consumer SSDs any good.
Additionally, you probably want to put the VM images on the SSDs as well, putting even more load/write commands on the disks, not to mention the IOPS that all these parallel processes impose.
So in the end you have to do the math whether you want to buy new Samsung Evos every year or maybe just go with a pair of SM883s for the next couple of years.
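To make that "do the math" part concrete, here is a rough back-of-the-envelope sketch (Python used purely as a calculator). The daily write figure and the amplification factor are assumptions you should replace with your own numbers:

Code:
# Rough SSD lifetime estimate -- all inputs are assumptions, plug in your own values.
tbw_rating = 150              # rated endurance in TB written, e.g. an 870 EVO 250GB
host_gb_per_day = 30          # roughly what an idle Proxmox host is said to write
write_amplification = 3       # assumed ZFS + SSD-internal amplification factor

nand_tb_per_year = host_gb_per_day * write_amplification * 365 / 1000
years = tbw_rating / nand_tb_per_year
print(f"~{nand_tb_per_year:.0f} TB/year to NAND -> drive worn out in ~{years:.1f} years")

With these made-up inputs you land at roughly 33 TB/year and 4-5 years, but the amplification factor is the big unknown, which is why monitoring the real wear matters.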
 
You can monitor the disks with smartmontools.
That is only a command-line tool, right? Are there any tools for more continuous monitoring that can maybe be logged over time?
It's not that Samsung disks wear out faster than any other disks, but the 9xx are consumer disks, so the write amplification will wear them out.
Yes, I am beginning to understand that might be an issue. There seem to be a lot of different opinions on this though. I've been getting some input from e.g. the TrueNAS forum, and the general consensus there seems to be that consumer-grade disks with ZFS will do just fine for a long period.

I must say it comes as a bit of a surprise. I mean, this is not a high-performance system. Just a small home media / file server running a couple of VMs and LXC containers with a couple of users accessing them now and again. One would think that Samsung disks like the ones I have bought would do fine for this, as they are not the crappiest disks one could buy. Some people claim they run Proxmox just fine on a USB stick, so it's easy to get confused.
30GB/day is approximately what Proxmox writes in logs per day. This can be reduced a bit, but the way the data is written (small chunks every few seconds) will not do consumer SSDs any good.
That is definitely a lot. Is this irrespective of the number of VMs and LXC containers that are running on the server?
Additionally, you probably want to put the VM images on the SSDs as well, putting even more load/write commands on the disks, not to mention the IOPS that all these parallel processes impose.
Well, as I suggested in my post, I am thinking of using the 870 disks for the Proxmox install and the 970 disks for the VMs/LXC containers, and in addition I will have a separate HDD ZFS pool for the actual media/file storage.
So in the end you have to do the math whether you want to buy new Samsung Evos every year or maybe just go with a pair of SM883s for the next couple of years.
Yes, that is definitely something I have to evaluate. If the disks are just going to die on me after some months or a year, I might start saving up for something more reliable.
 
That is only a command-line tool, right? Are there any tools for more continuous monitoring that can maybe be logged over time?
There most probably are, but I don't know of a nice tool for it. SMART logs are saved on the disks, though.
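If you want something you can schedule yourself, a minimal sketch could just call smartctl periodically and append the interesting lines to a CSV. This assumes smartmontools is installed, root privileges, and that the attribute/field names below exist on your drives (they do on many Samsung SATA and NVMe models, but check the output of smartctl -A first); the device paths and log location are just examples:

Code:
#!/usr/bin/env python3
# Append selected SMART wear values to a CSV once per run (schedule via cron or a systemd timer).
import csv
import datetime
import subprocess

DISKS = ["/dev/sda", "/dev/sdb", "/dev/nvme0"]   # adjust to your system
LOGFILE = "/var/log/ssd-wear.csv"

# Substrings to look for in the smartctl output; names differ per vendor/model.
INTERESTING = ("Wear_Leveling_Count", "Total_LBAs_Written",
               "Percentage Used", "Data Units Written")

def wear_lines(disk):
    out = subprocess.run(["smartctl", "-A", disk],
                         capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines()
            if any(key in line for key in INTERESTING)]

if __name__ == "__main__":
    now = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOGFILE, "a", newline="") as f:
        writer = csv.writer(f)
        for disk in DISKS:
            for line in wear_lines(disk):
                writer.writerow([now, disk, line])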

Yes, I am beginning to understand that might be an issue. There seem to be a lot of different opinions on this though. I've been getting some input from e.g. the TrueNAS forum, and the general consensus there seems to be that consumer-grade disks with ZFS will do just fine for a long period.
A NAS is not a hypervisor. Of course those disks have a high TBW value. But permanently hammering them with small writes is not what they were designed for.
Some of the disks out there have pages 20MB large. Since an SSD controller can't simply change a page in place, first all empty pages are used. After that, pages have to be read, changed and written again. Now, when you have another 20MB write load for a small 1KB log file update, you can do the math yourself what the 1000 TBW are worth.
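Spelled out, that worst case (every tiny log update costing a full 20MB program) looks roughly like this; the numbers are the assumptions from the paragraph above, not measurements:

Code:
# Worst-case bound: every 1 KB log update costs one full 20 MB internal write.
page_mb = 20            # assumed internal page/block size
log_update_kb = 1       # size of the actual log write
endurance_tb = 1000     # rated TBW of the drive

updates_survived = endurance_tb * 1e12 / (page_mb * 1e6)          # ~50 million programs
useful_host_gb = updates_survived * log_update_kb * 1e3 / 1e9
print(f"~{updates_survived:.0f} small updates, i.e. only ~{useful_host_gb:.0f} GB of 1 KB log writes")

In that (extreme) scenario the 1000 TBW would be used up after only about 50 GB of actual log data. Real controllers are smarter than that, but it shows why the small-write pattern, not the total volume, is the problem.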

I must say it comes as a bit of a surprise. I mean, this is not a high-performance system. Just a small home media / file server running a couple of VMs and LXC containers with a couple of users accessing them now and again. One would think that Samsung disks like the ones I have bought would do fine for this, as they are not the crappiest disks one could buy. Some people claim they run Proxmox just fine on a USB stick, so it's easy to get confused.
You will be fine for quite some time, I guess. But don't forget the feature creep. You will have more VMs over time, and every machine adds more IOPS and writes to your disks. Just keep an eye on it.

That is definitely a lot. Is this irrespective of the number of VMs and LXC containers that are running on the server?
Most of it comes from the HA services and the RRD diagrams. They can be tweaked or disabled in order to minimize the writes. But yes, it is a bit.

Well, as I suggested in my post, I am thinking of using the 870 disks for the Proxmox install and the 970 disks for the VMs/LXC containers, and in addition I will have a separate HDD ZFS pool for the actual media/file storage.
Now that you have the hardware, just give it a shot. If your disks run fine for two years, then nothing is lost. If they die earlier you can still replace them with something more durable.
 
Don't underestimate the writes. My home server (only used by me) is writing around 8MB/s to the SSDs while idling with all VMs/LXCs running. And the SSDs have internal write amplification too, so it's more like 24MB/s of writes to the NAND cells. So that's like 750TB per year.

If you want long-term monitoring of SMART data you can use monitoring tools like Zabbix. But keep in mind... monitoring = lots of small writes... 90 percent of that 750TB per year is just logs and metrics collected by Zabbix.

My home server is using 64GB RAM and ZFS and is mainly used as a media server too (where the media is stored on HDDs on another NAS server).

Check the TBW of your SSDs and calculate how long they should survive. I replaced my 500GB 970 Evos because they wouldn't survive a single year here.
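For reference, that is roughly how the 750TB figure and the "won't survive a year" conclusion fall out of the numbers above. The 3x amplification is the assumption from that post, and the TBW values are the usual manufacturer ratings for those models:

Code:
# Idle write rate extrapolated to a year, then compared against a TBW rating.
nand_mb_per_s = 24                         # 8 MB/s from the host x ~3 internal amplification
tb_per_year = nand_mb_per_s * 86400 * 365 / 1e6
print(f"~{tb_per_year:.0f} TB written per year")             # roughly 750 TB
for name, tbw in [("970 EVO 500GB", 300), ("970 PRO 512GB", 600)]:
    print(f"{name}: worn out in ~{tbw / tb_per_year:.2f} years")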
 
Don't underestimate the writes. My home server (only used by me) is writing around 8MB/s to the SSDs while idling with all VMs/LXCs running. And the SSDs have internal write amplification too, so it's more like 24MB/s of writes to the NAND cells. So that's like 750TB per year.

Thanks for the input.

Do you have separate SSD ZFS pools for Proxmox and the VMs/LXCs? I'm wondering what the wear on the Proxmox SSDs would be vs. the SSDs holding the VMs/LXCs (with a separate HDD pool for the actual storage).

If you want long-term monitoring of SMART data you can use monitoring tools like Zabbix. But keep in mind... monitoring = lots of small writes... 90 percent of that 750TB per year is just logs and metrics collected by Zabbix.

Maybe I'm misunderstanding here, but if the monitoring tool is responsible for so much of the writes, and thus wear on the SSDs, why do you use it?

My home server is using 64GB RAM and ZFS and is mainly used as a media server too (where the media is stored on HDDs on another NAS server).

Check the TBW of your SSDs and calculate how long they should survive. I replaced my 500GB 970 Evos because they wouldn't survive a single year here.

My 970 PROs, on which I plan to run the VMs/LXCs, have a TBW of 600. The 870 EVOs I was planning to use for the Proxmox install only have a TBW of 150 though.

I'm beginning to wonder if I should skip my plan to use Proxmox and just go for an Ubuntu server with Docker containers instead, if it is actually the case that Proxmox will wear out SSDs this fast.
 
All the reporting, logging and cluster features come with a price. :) It's a trade-off that you can only assess for yourself.

My 200 GB Intel DC S3700s that I use for the system have a TBW of 3700, the 1.92 TB SM883s for the disks have >10000, each paired with a much smaller page size. I don't really care about all the small writes, knowing that the disks were designed to endure that kind of load.
 
All the reporting, logging and cluster features come with a price. :) It's a trade-off that you can only assess for yourself.

My 200 GB Intel DC S3700s that I use for the system have a TBW of 3700, the 1.92 TB SM883s for the disks have >10000, each paired with a much smaller page size. I don't really care about all the small writes, knowing that the disks were designed to endure that kind of load.

True, but to do that assessment I need some knowledge and input ;) I have had my new ($2500) server hardware standing unused for 3 months now, because every time I try to Google something going into detail about my intended setup, I find new limitations with the intended hardware and software configuration, either performance-, reliability- or security-wise. ...and it's just supposed to be a simple home file/media server for a few users. I've spent literally hundreds of hours trying to find the best way to configure the server.

Anyway, this was how I originally planned to do my disk setup with Proxmox, but if my SSD and NVMe drives are just going to die on me within a year, I guess I'll have to wait and save up for something better. My alternative was to go with an Ubuntu w/Docker solution, but unfortunately root-on-ZFS and Docker do not yet play very nicely together, as I have understood it.
[attachment: diagram of the planned disk layout]
 
As soon as you have everything together, you'll find something else that you should have thought of. Just get started, your intended setup doesn't look bad.
If you're reluctant because of the SSDs, then sell them and buy some others. Or wear them out and buy some others then, it's up to you. :D
 
Thanks for the input.

Do you have separate SSD ZFS pools for Proxmox and the VMs/LXCs? I'm wondering what the wear on the Proxmox SSDs would be vs. the SSDs holding the VMs/LXCs (with a separate HDD pool for the actual storage).
Yes, Proxmox has its own pair of SSDs and the writes are not that bad. But I'm also not using ZFS for my boot drives, because I want encryption and ZFS can't do that for boot drives. Because of that I'm using LUKS and mdraid RAID1. But like someone already said, Proxmox itself should write around 30GB per day. So that's around 11 TB per year and shouldn't be a problem even for small consumer SSDs. But that's also why you shouldn't install Proxmox onto a USB stick, because 11TB per year is way too much for microSD cards/USB sticks.

What's really creating writes is all the VMs/LXCs and swap partitions on top of ZFS. Virtualization and ZFS are great for reliability and security, but they come at the price of high write amplification. Here with my setup, for each 1 MB of data written inside a VM, around 20-30 MB of data is written to the NAND flash of the SSDs to store that 1 MB.
Maybe I'm misunderstanding here, but if the monitoring tool is responsible for so much of the writes, and thus wear on the SSDs, why do you use it?
Not using monitoring isn't a good idea. If you run a lot of VMs and LXCs you just can't manually check each single VM/LXC on a daily basis.
You need to:
  • check the logs for errors
  • check the logs for signs of attackers
  • check that you are not running out of storage capacity
  • check that enough RAM is free
  • check that all services are running
  • check if packages need an important security fix
  • check if network is working
  • ...
Do that for 20 VMs/LXCs and you are wasting 2 hours each day just to verify that everything is working. That's why I use Zabbix and Graylog to collect metrics and logs. They aren't just collecting them, they also analyse all the data to see if something of interest happens. When something happens I get an email and will look at the dashboard. That's a nice list where I see all the problems of all VMs/LXCs on a single page. Way easier when a monitoring tool automates all of that.
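Just to illustrate the kind of check such a tool automates (Zabbix/Graylog do this for hundreds of metrics, keep the history and mail you), here is a toy sketch that warns when a filesystem or the RAM gets tight. The mount points and thresholds are made-up examples:

Code:
#!/usr/bin/env python3
# Toy health check: warn when a filesystem or RAM usage crosses a threshold.
import shutil

MOUNTPOINTS = ["/", "/var/lib/vz"]   # example paths to watch
DISK_WARN = 0.85                     # warn above 85 % used
MEM_WARN = 0.90                      # warn above 90 % used

def mem_used_fraction():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])        # values are in kB
    return 1 - info["MemAvailable"] / info["MemTotal"]

for mp in MOUNTPOINTS:
    usage = shutil.disk_usage(mp)
    frac = usage.used / usage.total
    if frac > DISK_WARN:
        print(f"WARNING: {mp} is {frac:.0%} full")

if mem_used_fraction() > MEM_WARN:
    print("WARNING: less than 10 % of RAM available")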

My 200 GB Intel DC S3700s that I use for the system have a TBW of 3700, the 1.92 TB SM883s for the disks have >10000, each paired with a much smaller page size. I don't really care about all the small writes, knowing that the disks were designed to endure that kind of load.
My 970 PROs, on which I plan to run the VMs/LXCs, have a TBW of 600. The 870 EVOs I was planning to use for the Proxmox install only have a TBW of 150 though.
I also removed all my consumer drives (like my 970 EVOs) and replaced them with second-hand Intel S3710 drives. They have over 30 times the write endurance, so 750 TB written per year isn't a problem anymore if the SSDs can handle petabytes of writes. And second-hand they weren't more expensive than my new consumer drives. But drives like the Intel S4610 aren't that expensive even if you buy them new, and they still have a lot of write durability (but not as good as the S3710/S3700).

New Intel S4610 480GB: 124,50€ (3000 TBW + Powerloss Protection)
New Samsung 870 EVO 500GB: 62,40€ (300 TBW, no PLP)
New Samsung 970 PRO 512GB: 127,90€ (600 TBW, no PLP)

Buying a pair of S4610s would have been a way better deal than the 970 PROs for the same price, even if the S4610 only uses SATA.

I'm beginning to wonder if I should skip my plan to use Proxmox and just go for an Ubuntu server with Docker containers instead, if it is actually the case that Proxmox will wear out SSDs this fast.
The problem isn't Proxmox. It is virtualization, server workloads, parallelization, ZFS, small random or sync writes. If you use an Ubuntu server with Docker it isn't much better, as long as you are running the same services and want the same level of reliability and security.
The point is simply that you bought consumer drives that are designed to be used by one person at a time, where that person is only doing 1 or 2 things simultaneously and isn't using advanced features such as copy-on-write filesystems like ZFS.
If you get enterprise/datacenter-grade drives, these are designed for server workloads and can handle that. You will pay 2 or 3 times more for such an SSD, but if it lasts 10 to 30 times longer then that's by far the "cheaper" option.

If you haven't opened your SSDs yet, I would try to sell them as new and get more appropriate SSDs. If you have already used them, you can do a test setup and monitor the SMART values. If you see that they will die way too fast, you can replace them later and use them in another system (I moved 6 consumer SSDs from my servers to my gaming PC) or try to sell them to get at least some money back.
 
Thank you again for taking the time to write a very thorough and clarifying reply. It is really appreciated!

Yes, Proxmox has its own pair of SSDs and the writes are not that bad. But I'm also not using ZFS for my boot drives, because I want encryption and ZFS can't do that for boot drives. Because of that I'm using LUKS and mdraid RAID1. But like someone already said, Proxmox itself should write around 30GB per day. So that's around 11 TB per year and shouldn't be a problem even for small consumer SSDs. But that's also why you shouldn't install Proxmox onto a USB stick, because 11TB per year is way too much for microSD cards/USB sticks.

So at least I should be OK with using one pair of the SSDs for Proxmox itself for now? It's not that I absolutely need to use ZFS for this. I just want a setup that is somewhat reliable and at least makes it easy to restore if it should crash. Btw, what is a recommended backup strategy for Proxmox (i.e. just the host itself)? Backing up just some configuration files or the whole disk?

What's really creating writes is all the VMs/LXCs and swap partitions on top of ZFS. Virtualization and ZFS are great for reliability and security, but they come at the price of high write amplification. Here with my setup, for each 1 MB of data written inside a VM, around 20-30 MB of data is written to the NAND flash of the SSDs to store that 1 MB.

Wow, I did not know that ZFS created that amount of write amplification. I understand that these are the drives I should put the money towards. I am thinking maybe for now I should use just one of my 970 PROs without ZFS and get my system up and running, sell off the other 970, and save up for a couple of better Intel drives to replace the one I'll be running with later. From what I understand it is fairly simple to take backups of VMs and LXCs using snapshots, so as long as I do that periodically until I get some Intel disks, I hope I should be good.

Not using monitoring isn't a good idea. If you run a lot of VMs and LXCs you just can't manually check each single VM/LXC on a daily basis.
You need to:
  • check the logs for errors
  • check the logs for signs of attackers
  • check that you are not running out of storage capacity
  • check that enough RAM is free
  • check that all services are running
  • check if packages need an important security fix
  • check if network is working
  • ...
Do that for 20 VMs/LXCs and you are wasting 2 hours each day just to verify that everything is working. That's why I use Zabbix and Graylog to collect metrics and logs. They aren't just collecting them, they also analyse all the data to see if something of interest happens. When something happens I get an email and will look at the dashboard. That's a nice list where I see all the problems of all VMs/LXCs on a single page. Way easier when a monitoring tool automates all of that.

Is it common to use separate logging tools like these even for a small home server? I thought maybe Proxmox itself took care of most of this?

I also removed all my consumer drives (like my 970 EVOs) and replaced them with second-hand Intel S3710 drives. They have over 30 times the write endurance, so 750 TB written per year isn't a problem anymore if the SSDs can handle petabytes of writes. And second-hand they weren't more expensive than my new consumer drives. But drives like the Intel S4610 aren't that expensive even if you buy them new, and they still have a lot of write durability (but not as good as the S3710/S3700).

New Intel S4610 480GB: 124,50€ (3000 TBW + Powerloss Protection)
New Samsung 870 EVO 500GB: 62,40€ (300 TBW, no PLP)
New Samsung 970 PRO 512GB: 127,90€ (600 TBW, no PLP)

Buying a pair of S4610s would have been a way better deal than the 970 PROs for the same price, even if the S4610 only uses SATA.

Unfortunately, the price gap between consumer-grade and enterprise hardware where I live seems to be much higher! And the used market is not much to brag about either :rolleyes: But I'll be looking more thoroughly going forward. In general, what are the specs I should be looking for in a durable drive, whether it be NVMe or SATA?

The problem isn't Proxmox. It is virtualization, server workloads, parallelization, ZFS, small random or sync writes. If you use an Ubuntu server with Docker it isn't much better, as long as you are running the same services and want the same level of reliability and security.
The point is simply that you bought consumer drives that are designed to be used by one person at a time, where that person is only doing 1 or 2 things simultaneously and isn't using advanced features such as copy-on-write filesystems like ZFS.
If you get enterprise/datacenter-grade drives, these are designed for server workloads and can handle that. You will pay 2 or 3 times more for such an SSD, but if it lasts 10 to 30 times longer then that's by far the "cheaper" option.

If you haven't opened your SSDs yet, I would try to sell them as new and get more appropriate SSDs. If you have already used them, you can do a test setup and monitor the SMART values. If you see that they will die way too fast, you can replace them later and use them in another system (I moved 6 consumer SSDs from my servers to my gaming PC) or try to sell them to get at least some money back.

They are opened and installed, but not used yet, as I haven't been sure how to move forward with the setup. However, if it sounds like a not-too-stupid plan, I think I'll move forward with installing Proxmox on the 870 SSDs and using the 970 NVMes for the VMs, until I can get hold of a pair of more decent drives. I am open to other suggestions though :)
 
I also removed all my consumer drives (like my 970 EVOs) and replaced them with second-hand Intel S3710 drives. They have over 30 times the write endurance, so 750 TB written per year isn't a problem anymore if the SSDs can handle petabytes of writes. And second-hand they weren't more expensive than my new consumer drives. But drives like the Intel S4610 aren't that expensive even if you buy them new, and they still have a lot of write durability (but not as good as the S3710/S3700).

New Intel S4610 480GB: 124,50€ (3000 TBW + Powerloss Protection)
New Samsung 870 EVO 500GB: 62,40€ (300 TBW, no PLP)
New Samsung 970 PRO 512GB: 127,90€ (600 TBW, no PLP)

Buying a pair of S4610s would have been a way better deal than the 970 PROs for the same price, even if the S4610 only uses SATA.
Unfortunately, the price gap between consumer-grade and enterprise hardware where I live seems to be much higher! And the used market is not much to brag about either :rolleyes: But I'll be looking more thoroughly going forward. In general, what are the specs I should be looking for in a durable drive, whether it be NVMe or SATA?

Actually, looking in the right place, the S4610 is not too pricey at all. About $130. The S3710 is $560 though, so waaay above what I can afford now.

Btw: do you have any recommendations for a reasonably priced NVMe drive? I would really like to use my M.2 slots, as I will then have more space for storage expansion later.
 
So at least I should be OK with using one pair of the SSDs for Proxmox itself for now?
Should be fine if you use consumer drives for the root filesystem, as long as you don't store VMs on it.
What is a recommended backup strategy for Proxmox (i.e. just the host itself)? Backing up just some configuration files or the whole disk?
Back up everything inside "/etc/pve"; a complete block-level backup of the whole drive using Clonezilla is also a good idea.
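A minimal sketch of the config-file half of that (the source paths and destination are examples to adjust, and it does not replace the Clonezilla image):

Code:
#!/usr/bin/env python3
# Archive the Proxmox host configuration with a date stamp.
import datetime
import tarfile
from pathlib import Path

SOURCES = ["/etc/pve", "/etc/network/interfaces"]   # example paths worth keeping
DEST_DIR = Path("/mnt/backup/pve-host")             # e.g. a dataset on the HDD pool

DEST_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.date.today().isoformat()
archive = DEST_DIR / f"pve-config-{stamp}.tar.gz"

with tarfile.open(archive, "w:gz") as tar:
    for src in SOURCES:
        if Path(src).exists():
            tar.add(src)

print(f"wrote {archive}")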
Wow, I did not know that ZFS created that amount of write amplification. I understand that these are the drives I should put the money towards. I am thinking maybe for now I should use just one of my 970 PROs without ZFS and get my system up and running, sell off the other 970, and save up for a couple of better Intel drives to replace the one I'll be running with later. From what I understand it is fairly simple to take backups of VMs and LXCs using snapshots, so as long as I do that periodically until I get some Intel disks, I hope I should be good.
ZFS is not only about software RAID. It features compression on the block level, checksumming of everything, and it can detect and repair corrupted data to prevent bit rot, and so on.
Is it common to use separate logging tools like these even for a small home server? I thought maybe Proxmox itself took care of most of this?
Proxmox can only show you used RAM, CPU utilization, SMART status and maybe free storage. But there are thousands of parameters and logs that you might want to monitor to see problems early.
Unfortunately, the price gap between consumer-grade and enterprise hardware where I live seems to be much higher! And the used market is not much to brag about either :rolleyes: But I'll be looking more thoroughly going forward. In general, what are the specs I should be looking for in a durable drive, whether it be NVMe or SATA?
You want power-loss protection for data safety, or otherwise your SSDs won't be able to use their volatile cache for sync writes. If an SSD can't use the cache it won't optimize writes and the write amplification will be really high (you write 100x 1KB synchronously and 100x 1MB will be written to the NAND; with caching it could merge the 100x 1KB into 1x 100KB and only write 1x 1MB).
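Roughly, with the example numbers above (all assumed, just to show the effect of the cache merging):

Code:
# Write amplification for 100 sync writes of 1 KB each, with and without cache merging.
host_bytes = 100 * 1 * 1024                 # 100 x 1 KB from the application
nand_without_plp = 100 * 1 * 1024**2        # every sync write costs a full 1 MB program
nand_with_plp = 1 * 1024**2                 # merged in cache, flushed as one 1 MB write
print(f"without PLP: amplification ~{nand_without_plp / host_bytes:.0f}x")
print(f"with PLP:    amplification ~{nand_with_plp / host_bytes:.0f}x")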
And the higher the TBW the better. SLC is more durable than MLC, which is more durable than TLC, and QLC is the worst type of NAND.
The write/read rates the manufacturers give you aren't that useful. With server workloads the drives must sustain permanent writes/reads, and most consumer drives can only give you these high write/read rates for bursts of data. Try to write 100GB instead of 1GB at once and your 3000MB/s SSD will drop down to 50MB/s or something like that. Enterprise SSDs are often not that fast, but their performance won't drop that much.
Actually, looking in the right place, the S4610 is not too pricey at all. About $130. The S3710 is $560 though, so waaay above what I can afford now.

Btw: do you have any recommendations for a reasonably priced NVMe drive? I would really like to use my M.2 slots, as I will then have more space for storage expansion later.
There aren't many enterprise-grade M.2 SSDs because the footprint is just too small and there is not enough room for all the capacitors for power-loss protection and the additional NAND chips (if you buy a 1TB consumer-grade SSD it might use 1.1TB of NAND to get a 10% reserve that you can't directly use; with enterprise drives it is not uncommon that a 1TB drive will have 1.6 TB or something like that).
Most enterprise SSDs use U.2 instead. You can mount those U.2 SSDs in a 2.5" slot and use an M.2-to-U.2 cable.
 
There aren't many enterprise-grade M.2 SSDs because the footprint is just too small and there is not enough room for all the capacitors for power-loss protection and the additional NAND chips (if you buy a 1TB consumer-grade SSD it might use 1.1TB of NAND to get a 10% reserve that you can't directly use; with enterprise drives it is not uncommon that a 1TB drive will have 1.6 TB or something like that).
Most enterprise SSDs use U.2 instead. You can mount those U.2 SSDs in a 2.5" slot and use an M.2-to-U.2 cable.

Hmmm... maybe I need to change my plan a bit then. As I said, I would really like to utilize the M.2 slots somehow, and if there are no good enterprise M.2 SSD options that I can put my VMs on, perhaps I'll put the 970 PROs in a ZFS mirror and install Proxmox on those (or is that just massive overkill??) and use another temporary SSD for the VMs until I get hold of some good SATA SSDs like the S4610.
 
Hmmm... maybe I need to change my plan a bit then. As I said, I would really like to utilize the M.2 slots somehow, and if there are no good enterprise M.2 SSD options that I can put my VMs on, perhaps I'll put the 970 PROs in a ZFS mirror and install Proxmox on those (or is that just massive overkill??) and use another temporary SSD for the VMs until I get hold of some good SATA SSDs like the S4610.
Like I said, there are some M.2 SSDs (like the Micron 7300 PRO/MAX, Samsung PM983 or Intel P4801X), or you buy an additional M.2-to-U.2 adapter with a U.2 cable so you can use U.2 drives with your M.2 slot for more drive options.
 
Like I said, there are some M.2 SSDs (like the Micron 7300 PRO/MAX, Samsung PM983 or Intel P4801X), or you buy an additional M.2-to-U.2 adapter with a U.2 cable so you can use U.2 drives with your M.2 slot for more drive options.
Thanks for the suggestions! Actually, the Micron 7300 and the Samsung PM983 drives seem like good options. They are not too badly priced compared with the 970 PRO and have good TBW. The Micron 7300 PRO 960GB in particular is a good candidate. It costs around $190 and has a TBW of 1.9 PB.

I hope that a couple of Micron 7300 PRO 960GBs in a ZFS mirror for my VMs/LXCs, in combination with two Intel D3-S4610 240GBs in a ZFS mirror for the Proxmox install, will be a good combo.
 
