Help! Need to order parts before the 1st

IxsharpxI

Member
Jun 18, 2019
Hey everyone, thanks for the quick help. My current host is an R610 with the PERC 6/i doing the RAID. I have a RAID 1 boot with 2x 120GB SSDs, a RAID 1 of 3x 500GB HDDs, and another single 500GB HDD. However, I now want to do it right, get an HBA, and finally have SMART info. My plan is to create a zraid1 boot out of the 2x 120s and then create another raidz1 4x SSD pool out of 1TB SK hynix S31 drives. They are rated for 600 TBW. I also looked at some cheaper WD Blues that were 400 TBW. I'm worried about wearing them out too quickly, but I understand I can run a cron job for manual TRIM. Performance/life wise, would I end up better off buying 3x 1TB 7200 RPM or 10K spinners and a 500GB SSD cache? Or are all-SSD pools safe now with the new TRIM features? I want to purchase before the 1st so I can write it off. What would you do?
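For context, the manual TRIM I had in mind would just be a weekly cron entry along these lines (the pool name "tank" and the zpool path are placeholders):

# /etc/cron.d/zfs-trim -- run a manual TRIM on the pool once a week
0 3 * * 0   root   /sbin/zpool trim tank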
 
I think you are mixing things up (see the example commands after this list):
stripe = like raid0 = 2 or more drives
mirror = like raid1 = 2 or more drives
raidz1 = like raid5 = 3 or more drives
raidz2 = like raid6 = 4 or more drives
striped mirror = like raid10 = 4 or more drives
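For example, the layouts you are talking about would be created roughly like this (pool and device names are placeholders, and on Proxmox the boot pool is normally set up by the installer, so treat this as a sketch only):

zpool create rpool mirror /dev/sda /dev/sdb                          # 2-disk mirror (like raid1)
zpool create tank raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf         # 4-disk raidz1 (like raid5)
zpool create fast mirror /dev/sdg /dev/sdh mirror /dev/sdi /dev/sdj  # striped mirror (like raid10)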

If you worry about SSD wear, buy some high-endurance enterprise SSDs with SLC/MLC flash and power-loss protection, so you get lower write amplification on sync writes. If you need good IOPS, don't buy HDDs plus an SSD for caching; that doesn't work well. Buy SSDs instead. TRIM is not the problem. The problems are bad padding because of mixed block sizes, write amplification caused by virtualization, and the internal write amplification of the drives, especially if you are running DBs or anything else that uses sync writes. And ZFS isn't kind to consumer SSDs because it is a copy-on-write filesystem.
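To see where the padding comes from, you can look at the pool's ashift and the zvols' volblocksize; the pool and dataset names here are only examples (Proxmox names zvols like vm-<id>-disk-<n>):

zpool get ashift tank                      # sector-size exponent, fixed at pool creation
zfs get volblocksize tank/vm-100-disk-0    # block size of one VM disk (zvol)
# On Proxmox, the block size for newly created zvols is the "blocksize" option of the zfspool storage in /etc/pve/storage.cfg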

Durable 1TB enterprise SSDs offer around 17,000 TBW, for example, so there's no comparison to the 600 or 400 TBW of these consumer SSDs.
If you don't have the money for those, look for second-hand enterprise drives. They often still have thousands of TBW left and aren't more expensive than new consumer SSDs.
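If you go that route, check how much life a used drive has left with SMART before trusting it; attribute names differ per vendor, so the grep patterns here are just examples:

apt install smartmontools
smartctl -a /dev/sda | grep -i -e wear -e written -e 'percentage used'
# Typical indicators: Media_Wearout_Indicator, Wear_Leveling_Count, Percentage Used,
# Total_LBAs_Written / Data Units Written (multiply by the unit size to get bytes)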
 
Thank you for your response. You are correct, I was mixing up the RAID types, but you straightened it out. I'm not sure of my IOPS requirement. I have a mix of Windows, Linux, and other VMs, and a few containers running Docker, so I imagine a lot. But they are currently on HDDs with no cache as LVM and doing OK.

I guess my other option is to continue what I'm doing now: leave the RAID card in and present the boot mirror to PVE (leave it LVM or ext4 or whatever it is), and then present another RAID pool to PVE with the 4 new SSDs and just use LVM. Would this reduce the wear on the SSDs since it's not using ZFS? And I could just back things up to my TrueNAS ZFS for safekeeping?

I will also look at some used enterprise SSDs, but I'm hoping to stay at $100 each.
 
If you don't use ZFS, your virtual disks will most likely be qcow2, and that again is a copy-on-write format.
If you use good SSDs, ZFS isn't a problem. You could use something like iostat (apt install sysstat && iostat) to measure how many IOPS and how much written data per day your HDDs are handling, and then use that to predict how much data would be written to the SSDs and how long it would take to reach the TBW. Because of the way SSDs work, I think you need to multiply that value by a factor of at least 3.
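A quick way to get that number (the interval and count here are just an example; the second report shows the average over the measured window):

apt install sysstat
iostat -dm 60 2        # -d = devices only, -m = megabytes; one 60-second sample, 2 reports
# Take MB_wrtn/s from the second report and multiply by 86400 seconds to estimate MB written per day.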

My VMs, for example, write 30GB of real data per day. Because of padding overhead and virtualization, I get a write amplification from guest to host of around factor 7, so those 30GB cause roughly 200GB per day to be written to the ZFS pool. The SSDs have an internal write amplification of around factor 3 on top of that, so those 200GB per day cause around 600GB per day to be written to the NAND flash of the SSDs.
In total, around 600GB are written to the SSDs just to store 30GB of real data.
With that in mind it is quite easy to reach the TBW of a consumer SSD. They are just not built for server workloads.
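As a back-of-the-envelope comparison with those example numbers (ignoring that a pool spreads writes across several drives, so per-drive wear would be lower):

600 GB/day to flash = 0.6 TB/day
600 TBW consumer SSD: 600 / 0.6 ≈ 1000 days ≈ 2.7 years
17000 TBW enterprise SSD: 17000 / 0.6 ≈ 28000 days ≈ 77 years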
 
