ZFS with SATA and SSD

Alessandro 123
May 22, 2016
Has anyone tried to use ZFS with SATA disks in RAIDZ (or mirrors), using a mirror of SSDs as SLOG and L2ARC?
Can this be a valid alternative to a SAS environment with the same SSDs?

If reads and writes go directly to the SSDs, whether the bulk storage is SAS or SATA shouldn't change performance much, since the SSDs would always be faster.
 
I am not sure how the speed of SAS vs. SATA SSDs compares. Queue depth might come into play at some point - but at SSD speeds it is probably less of an issue than with HDDs.

The more vdevs you have, the faster the pool will be. So the pool itself would be faster than a single mirrored SLOG / striped L2ARC, unless you get NVMe devices.
 
SAS disks spin at 15k rpm, SATA only at 7200.
Latency and random access times are much lower on SAS due to the higher rotation speed.

But what if I set up a huge L2ARC for reads and 5 GB as SLOG, both on SSD?
All reads and writes should then be directed to SSD, so the maximum speed would be the one offered by the SSD, right?

Anyone using this in production?
 

I don't think you fully understood how SLOG and L2ARC work. Some problems/limitations of your proposal:
  • SLOG only helps sync writes (you don't have to wait for your slow devices like you normally would: sync writes are written synchronously to the SLOG, and then asynchronously to the regular slow vdevs)
  • L2ARC eats some memory, because its index lives in the ARC - a very big one is usually not a good idea!
  • L2ARC only speeds up access to cached data (it's an extension of the ARC, after all) - for the usual VM use case, ZFS won't see many cache hits because the OS inside the VM already caches most of it anyway
  • mirroring the L2ARC does not make much sense IMHO
  • a mirrored SLOG only makes sense if you are very paranoid (the SLOG is only ever actually read back for the last few seconds of sync writes, if your host crashes and the log needs to be replayed!), and it can actually be slower than a single-device one
like @Steved said, you will probably see more of an improvement by spreading the load over more disks/vdevs. A fast SLOG does not hurt for sync-intensive workloads, and an L2ARC does not hurt unless it's far too big, but they are not magic wands that will transform a slow pool into a rocket ;)
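For reference, attaching a SLOG and an L2ARC to an existing pool looks roughly like this. This is a sketch: `tank` and the `/dev/disk/by-id/...` device names are placeholders, not from this thread:

```shell
# Add a mirrored SLOG (small, power-loss-protected enterprise SSDs)
zpool add tank log mirror /dev/disk/by-id/ssd-slog-a /dev/disk/by-id/ssd-slog-b

# Add a cache (L2ARC) device - note that cache vdevs cannot be mirrored;
# multiple cache devices are simply striped
zpool add tank cache /dev/disk/by-id/ssd-cache-a

# Verify the resulting layout
zpool status tank
```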
 
I don't think you fully understood how SLOG and L2ARC work..

This is sure :)


So, the best approach would be to stay with SAS disks, like on every other server that I have here.
My concern is the missing hardware write cache. The RAID controller that I have here has 1 GB of BBWC.
Replacing the RAID controller with ZFS, all writes become write-through instead of write-back. That is usually a huge performance hit.

But for a mail server (not a virtual machine), SATA disks with ZFS and an SLOG on SSD would be a big boost. Right?
 
like I said, sync writes get faster with a fast SLOG, so an SLOG (with the right enterprise SSD!) will never hurt. L2ARC "steals" memory from the ARC and causes additional writes, so it can make everything slower - but this really depends on the exact workload and hardware.
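Whether an L2ARC is paying for its RAM overhead can be checked empirically. On Linux with OpenZFS, the ARC and L2ARC hit/miss counters are exposed under /proc; this is a diagnostic sketch, not something mentioned in the thread:

```shell
# Print ARC and L2ARC hit/miss counters (Linux, OpenZFS)
awk '$1 ~ /^(hits|misses|l2_hits|l2_misses)$/ { print $1, $3 }' \
    /proc/spl/kstat/zfs/arcstats

# After warm-up, an l2_hits value that stays tiny relative to l2_misses
# suggests the L2ARC is consuming ARC memory without earning it back.
```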
 
Usually, reads aren't an issue even with SATA disks. The OS is able to cache files, and so does ZFS even without an L2ARC.
My question is about writes. SAS disks are way faster than SATA for random workloads. What if I put in an SLOG (Intel DC S3610) and force all writes to be sync, thus forcing the use of the SLOG?

I would like, if possible, to get close to the performance of SAS with hardware RAID and BBWC by using ZFS with SATA disks and an SLOG on SSD. Is this possible?
 
What if I put in an SLOG and force all writes to be sync, thus forcing the use of the SLOG?

that is just plain wrong and a very bad idea - async writes already get buffered in memory, which is way faster than any SLOG you can have. The SLOG is just a kind of buffer that decreases the latency of sync writes without losing their "sync" property, and it can help absorb bursts of sync writes (unless the burst is bigger than the SLOG can handle), but it does not magically make your pool faster.
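For completeness, the knob being discussed here is the per-dataset `sync` property. Forcing it to `always` pushes every write through the ZIL/SLOG, adding latency to writes that would otherwise land in RAM. The dataset name `tank/mail` below is an assumption for illustration:

```shell
# Check the current sync policy (the default is 'standard')
zfs get sync tank/mail

# sync=always forces every write through the ZIL/SLOG - this is what the
# proposal above amounts to, and it slows async writes down, not up
zfs set sync=always tank/mail

# sync=standard honours what applications request; usually the right choice
zfs set sync=standard tank/mail
```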
 
The question was:

can I replace a XenServer hypervisor built on 6 SAS 15k disks in RAID-6 with 1 GB of BBU-backed cache, with a Proxmox server running ZFS RAID10 on SATA disks and the SLOG on an enterprise SSD, without affecting read/write performance?

This is what I would like to do :)
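The proposed layout - ZFS "RAID10", i.e. striped mirrors, plus an SSD SLOG - would be created roughly like this. Pool and device names are placeholders, and the disk count is an assumption:

```shell
# Striped mirrors ("RAID10") over four SATA disks
zpool create tank \
    mirror /dev/disk/by-id/sata-a /dev/disk/by-id/sata-b \
    mirror /dev/disk/by-id/sata-c /dev/disk/by-id/sata-d

# Enterprise SSD (with power-loss protection) as the SLOG
zpool add tank log /dev/disk/by-id/ssd-slog
```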
 
No, you can't. The performance will be lower, except in a very few cases (a high cache hit rate after warm-up, or highly compressible data combined with ZFS compression).
As @fabian already told you, the writes don't matter much if you use an SSD SLOG. They are either async, and get written to disk on txg commits, or sync, and they go to the SSD first.
 
OK, so I'll go with SAS disks and not SATA.
But will the missing hardware RAID write cache affect performance a lot?

I can't afford to slow things down
 
You do have a write cache. It is called RAM for async writes, or SLOG plus RAM for sync writes.

That's not really true. A hardware RAID controller can safely use write-back caching; you can't do the same reliably without a battery unit. In case of a power failure, all data written only to RAM would be lost.
 
That's for sure, but to reach similar speed, is an SLOG enough, or should I tweak something else?

I have to build a new server and I would like to use ZFS for the very first time, but I don't want to create any bottleneck
 
Maybe a few SSDs for the SLOG. The controller cache is not SATA/SAS bound, it sits on PCIe, so it is many times faster.
To be honest, I don't think you will feel a difference in real life.
 