SSD: HW RAID 10 or ZFS RAID 10?

sahostking

Renowned Member
Hi Guys

We currently have a few VPS servers on Proxmox with HW RAID 10 on enterprise SATA disks (x6).

We want to move them to the same server specs but use SSDs instead.

Do you think ZFS is the better option with SSDs for cPanel VPS servers hosting around 500 accounts each?
Or should we stick with good ol' HW RAID cards?

Also, should we stick with what we have been using for years, RAID 10?

Just want some advice and ideas :)
 
ZFS is OK. HW RAID is OK. SSDs of course are OK, but only enterprise SSDs, this is important.

We said goodbye to HW RAID and welcomed ZFS :)
I think ZFS has many more advantages. Data is easy to handle. Ask Google about ZFS and you will see.
https://en.wikipedia.org/wiki/ZFS
So it is up to you.
 
Hi sahostking,

I am driven by the same questions as you.

I decided to reply to this thread because nobody is explaining what actually changes beyond just using another filesystem.

ZFS needs a lot of RAM to perform adequately. The FreeNAS people say not to install their system with less than 8 GB of RAM; others say it only becomes fun above 32 GB. I can confirm that as well.

So if your host is a machine with at most 32 GB of RAM, you don't have many options for balancing the RAM between ZFS and your virtual instances.
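To give a concrete, hypothetical example of that balancing act: on a Proxmox host the ARC can be capped via a module option so the rest of the RAM stays free for the guests (the 4 GiB value is only an illustration, not a recommendation):

  # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 4 GiB (value in bytes)
  options zfs zfs_arc_max=4294967296
  # then rebuild the initramfs and reboot, e.g. on Debian-based systems:
  update-initramfs -u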

You say you want to stay with your server hardware, but you didn't mention its current RAM size, the maximum possible RAM size, AND whether a RAM upgrade is possible/affordable for you. So on older hosts (already at their maximum RAM) it seems better to also stay with "old school" storage (a controller with cache and BBU).

A ZFS-driven NAS with 8 GB of RAM can manage serving ISO images and backup space via NFS, but don't go further and use it as iSCSI storage in production environments.
I have a NAS in testing with mirrored vdevs (4x 250 GB Samsung 840 Pro, 25% overprovisioned, plus one Intel 3700 providing a 4 GB SLOG) and 32 GB of RAM, and I am not satisfied at the moment. Next I will switch from NFS to iSCSI.
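For reference, a pool like the one described might be created roughly like this (the pool name and device paths are placeholders, not my actual ones):

  # two mirrored pairs striped together, plus a separate SLOG device
  zpool create tank \
    mirror /dev/disk/by-id/ata-Samsung_840PRO_1 /dev/disk/by-id/ata-Samsung_840PRO_2 \
    mirror /dev/disk/by-id/ata-Samsung_840PRO_3 /dev/disk/by-id/ata-Samsung_840PRO_4 \
    log /dev/disk/by-id/ata-INTEL_SSD-part1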

In sum, I would reply to your question:
If your server already has plenty of RAM, or can take a lot of RAM (128 GB) and such an upgrade is affordable and makes economic sense on that machine, then you can give ZFS a try.
If your machine is older and cannot take 32/64 GB reserved for ZFS, you should instead look at which controllers can handle (enterprise) SSDs (compatibility list).

Hope this helps.

vmanz
 
ZFS also works fine in low-memory environments if you only want the features of ZFS, not its speed. It really depends on your workload and what you want to achieve.

I'm running my personal NAS with ZFS and yes, it's not as fast as it was with mdadm and ext4 (by a factor of 4), yet I'm fine with that because of its awesome snapshots, cloning, fast incremental backups, scrubbing, silent data corruption detection, etc.

To answer @vmanz's question about what changes between HW RAID and ZFS:
  • Managing disk failures is of course very different
  • Setup in Proxmox is different: Directory vs. ZFS as the storage type
  • ZFS provides thin provisioning and cloning (copy-on-write)
  • OS-level backups (not Proxmox-based backups) are different; ZFS is much, much simpler and faster thanks to snapshot-based differential backups (a rough sketch follows this list)
  • ZFS is slower (not everywhere, but on average) and needs more RAM than hardware RAID, but it has many more enterprise-like features, e.g. silent data corruption detection and correction, which are totally worth it!
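As a rough sketch of that snapshot-based workflow (the dataset, snapshot and host names are made up):

  # take today's snapshot, then send only the delta since yesterday's
  zfs snapshot rpool/data/vm-100-disk-1@daily-2
  zfs send -i rpool/data/vm-100-disk-1@daily-1 rpool/data/vm-100-disk-1@daily-2 \
    | ssh backuphost zfs receive backup/vm-100-disk-1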
 
Yes, I also think the approach to building storage has luckily changed, and I am very happy in advance to get out of this controller/firmware/BBU pain. You can also follow this in how the controller manufacturers' brands keep migrating from one owner to a newer owner.
... and we don't need the specifications of these filesystems here.

But those new (file)systems like ZFS, Btrfs, ReFS, ... need hardware environments that are somewhat contemporary.

The money you needed before for a HW RAID controller is now not for holidays, cigars or whatever. You need it for RAM.
And that is why you end up with a new host system, because it is neither possible nor affordable to renovate an older system with a new board that can take that many memory modules, and perhaps a new CPU. In short: buy a new system!

Besides, I can take an 8-year-old server with 16 GB of RAM, grab a HW RAID controller with BBU, install Proxmox VE 4 on conventional storage and have fun with VMs as long as they fit side by side in the host's RAM.

Installing anything with ZFS on that machine is no longer acceptable for production. Maybe for storing files, if you can afford the electrical energy (or need the waste heat).

... and the second sentence in this thread reads "We want to move them to the same server specs but use SSDs instead."
... that may be difficult.
 
I think that ZFS on SSDs vs. HW RAID on non-SSD disks is still faster even if you limit your ZFS to e.g. 1 GB of RAM or less. Does your controller support JBOD? If not, you will still need HW RAID support for creating volumes. ZFS should run on real disks, not on RAID volumes.

But @vmanz is right: if you only put in your SSDs without changing the filesystem type, your RAM usage stays the same and everything gets faster. Hopefully your RAID controller is capable of some SSD optimizations; if not, you will not gain the full speed.

In the end it comes down to trying it for yourself :-D
 
I believe that you don't NEED tons of RAM for ZFS, you just WANT tons of RAM for ZFS (or for any RAM cache).
ZFS is also happy to have a minimum and maximum ARC/RAM cache set (see the sketch below).
ZFS is also happy with non-ECC RAM; however, any filesystem you really, really care about should use ECC (there is nothing special about ZFS that requires ECC compared to any other system).
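On ZFS on Linux, for example, the current ARC size and its configured bounds can be inspected and, if needed, adjusted at runtime (the 8 GiB value is just an illustration):

  # current ARC size and configured bounds
  grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats
  # lower the maximum ARC size on the fly (value in bytes, here 8 GiB)
  echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max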

Note that there is also a big difference in what you are trying to do. Is this for a large enterprise setup, or something for a SOHO or for your own use?

Can you haul dirt around in a Mini Cooper? Sure you can. If you are planting in your garden or in pots, then it will work fine. If you need to do mountain-top removal, then it's not going to go well. Form follows function.
 
I see a lot of you are saying ZFS.

Currently we use HW RAID 10 with BBU (write-back) and have 64 GB of ECC RAM and an Intel Xeon E5-2620 2.4 GHz CPU with 12 cores.
6 x 2 TB Western Digital enterprise SATA disks in RAID 10 are used on the HW RAID controller.

Now we have 3 cPanel KVM VMs on here, each with 8 vCPUs and 12 GB of RAM. But we are noticing some disk iowait increases from time to time. We host just around 500 accounts on each, but with 800 or so sites per server, and it seems to run fine.

But we are trying to improve server response times, as at times it gets a bit slow and UptimeRobot and Nagios show some slowness occurring.
When we check, iowait is around 40.0 or so on average, which is not good, as usually it is under 1. When I look deeper, it happens when customers run backups via cPanel, or restore or unzip large files. When iowait stays like this for some time, the CPU load eventually climbs from its usual 0.8 to 2.5 up to high values like 12, though usually staying under 20.
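For what it's worth, one way to see which device those waits come from while a backup or unzip is running is iostat from the sysstat package (interval and sample count below are arbitrary):

  # extended per-device statistics every 5 seconds, 12 samples
  iostat -x 5 12
  # watch the await and %util columns while a cPanel backup/restore runs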

Will SSDs in a ZFS RAID 10 help with this? What are your experiences?
 
I don't think RAID 10 is the best choice with SSDs:
The two main reasons for RAID 10 are to get the fastest read performance even when you don't have high-end 15K SAS hard drives or SSDs, and the low demand on CPU or RAID controller performance because you don't have the parity overhead...
If you have very fast drives like your enterprise SSDs and a fast RAID controller (maybe the same applies with ZFS), then I would recommend another RAID level, for example RAID 5 or RAID 6 (ZFS: RAID-Z1 or RAID-Z2).
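For completeness, a six-disk RAID-Z2 pool would be created roughly like this (pool name and device paths are placeholders):

  # one raidz2 vdev over six disks: usable capacity of four, any two may fail
  zpool create tank raidz2 \
    /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 \
    /dev/disk/by-id/disk4 /dev/disk/by-id/disk5 /dev/disk/by-id/disk6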

best regards,
maxprox
 
That all depends on your requirements and workload. For best IOPS, RAID 10 is unbeaten. With the same disks, the rule of thumb is that you should expect 50% better IOPS with RAID 10 compared to any other RAID solution (excluding RAID 0, which is not really RAID anyway).
 
Ok guys thanks.

I set up 2 servers just to test ZFS performance and put a few very busy VPS servers on them.

First server: HW RAID 10 with enterprise SATA 7200 rpm disks (x6) and BBU (write-back); second server: ZFS with the same disks (also x6) in a ZFS RAID 10 layout.

The VPS servers run fine on HW RAID, but when I move them to the server with ZFS they slow down a lot.

Is ZFS only good with SSDs, since I see most people here using SSDs? We want to host at most 25 smallish VPS servers per host. Or could this CPU be the bottleneck? Intel Xeon E5-1620 3.5 GHz.

Also note that each server had 64 GB of ECC memory, and I gave 32 GB of memory to ZFS for the ARC.
 
ZFS is not necessarily faster than a HW RAID. ZFS is an awesome SW RAID. It is very flexible, reliable, fast, and allows exporting and importing pools. It offers a lot of features that HW RAID doesn't.

It can matter whether you make a RAID 1 then 0, or a 0 then 1.
From what I've read, it is best to lay them out like this:

A mirror --| to create mirror vdev 1
B mirror --|

C mirror --| to create mirror vdev 2
D mirror --|

E mirror --| to create mirror vdev 3
F mirror --|

The pool then stripes (RAID 0) across vdevs 1, 2 and 3.
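Expressed as a command (pool name and device names are placeholders), that layout is a single pool built from three mirror vdevs:

  # three mirrored pairs; ZFS stripes across the three vdevs automatically
  zpool create tank \
    mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
    mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD \
    mirror /dev/disk/by-id/diskE /dev/disk/by-id/diskF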
 
Or is the best option ZFS raidz2 (RAID 6) with x6 disks and no bottleneck?

Currently we have 25 VPS servers on HW RAID 10 with 2 TB SATA 7200 rpm disks. No issues. But I assume SSDs on ZFS with 6 disks in RAID 6 (raidz2) would by far beat any HW RAID 10 with BBU, right? Due to the fact that it's not using SATA spinners but SSDs instead?
 
second server ZFS (same disks, also x6, in a ZFS RAID 10 layout)
What does zpool status show for your pool?
As MikeP explained, you should make the RAID 10 out of three mirrored pairs (3x2) instead of a RAID 10 with two three-disk mirrors (2x3).
Also, since your hardware RAID has a BBU, you should add an enterprise SSD as a log device to your zpool to make a decent comparison.
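Adding such a log device to an existing pool is a one-liner (pool name and device path are placeholders):

  # attach an enterprise SSD (or a partition of it) as SLOG
  zpool add tank log /dev/disk/by-id/ata-INTEL_SSD-part1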
 
Any SSD is faster than any HDD*. (*Maybe some really old SSDs aren't.)

SSDs are 100x to 10,000x faster than HDDs for random IO and in IO/sec.
One SSD on SATA III will be faster than 8 striped HDDs on SATA III, even though both can max out the bus, because of the IO/sec.
RAM is then 100x to 10,000x faster than SSDs for random IO.

RAID 0 on SSDs is essentially useless. RAID 0 should really be called RAID -1, as it doubles your chance of complete loss.

If you value your data, you should use redundancy (e.g. RAID 1 or higher, ECC), regardless of the recording tech/medium. Even with polyphasic encabulator hologram storage. :)
 
