M.2 SSD Cache and ZFS Configuration

Ryan0751
Feb 13, 2018
I'm building out a Proxmox box.

I've purchased two Samsung 860 Pro 1TB SSDs. I want to use these in a ZFS mirrored pair to store most of my VMs and containers, and I want the redundancy of the mirror.

I've also purchased two Western Digital 6TB RED disks. I'd also like to mirror these for redundancy.

I've also picked up a Samsung 1TB 960 Pro M.2 SSD.

What I would like to do is use the M.2 SSD as a cache. My question is, will I be able to use this as a cache for both the 860 SSD mirror as well as the WD spinning disk mirror?

Any tips for how I should set this up (not specific commands, but rather just the ZFS layout)?

Thanks!
 
My question is, will I be able to use this as a cache for both the 860 SSD mirror as well as the WD spinning disk mirror?

Yes, you can, but it won't really do much (if anything) for you.

What you can do is use the M.2 SSD as a SLOG for both pools. The Samsung Pro line of drives is... not really dependable for that job, though. Search the forums; there's quite a bit of discussion on the subject. If this is for anything that will cost you if it fails, you really want an enterprise drive.
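If you did go that route, the mechanics would look roughly like this (a minimal sketch; the device names, pool names, and partition sizes are placeholder assumptions, and a SLOG only needs a few GB):

sgdisk -n 1:0:+16G /dev/nvme0n1   # partition for the first pool's SLOG
sgdisk -n 2:0:+16G /dev/nvme0n1   # partition for the second pool's SLOG
zpool add pool1 log /dev/nvme0n1p1
zpool add pool2 log /dev/nvme0n1p2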
 
Oh hmm. This is for a personal system. Is it therefore not really going to be beneficial performance-wise? I understand that for a SLOG you need a battery backup to complete the writes if there's a failure, correct? What about using it for L2ARC?
 
The way L2ARC works is that its lookup tables all have to be held in memory, so the bigger the L2ARC, the bigger the RAM commitment; it's more efficient to simply let the system use that RAM directly as ARC. On a personal system, you simply can't generate enough load to benefit.
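To put rough numbers on it: every record cached in L2ARC keeps a header in RAM. The per-record size is an assumption here (on the order of 70 bytes in recent OpenZFS, several times larger in older releases), but the scaling is the point:

1 TiB L2ARC / 128 KiB recordsize  ≈ 8.4M records × ~70 B ≈ ~0.6 GiB of RAM
1 TiB L2ARC /  16 KiB volblocksize ≈  67M records × ~70 B ≈ ~4.4 GiB of RAM

VM storage on zvols tends toward the small-block end, so the RAM cost is at its worst exactly where you'd want the cache.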

I understand that for SLOG you need a battery backup to complete the writes if there's a failure, correct?

Sort of. Enterprise SSDs usually have a battery or capacitor on board for this, but even if you don't, consider your window of vulnerability. ZFS is a copy-on-write filesystem, so the worst that can happen is that whatever commits were sitting in the SLOG simply never take place, and you end up with the filesystem in whatever state it was in before those writes. My original comment stands: what's your risk if that happens?
 
No. Let's deconstruct the layers of the solution so you can see where the potential pain points are. ZFS pools are made up of block devices, grouped into vdevs that serve different functions (data, log, cache).

Your proposed pools would look something like this:

pool1
mirror
-- 6TBdisk_1
-- 6TBdisk_2
logs
-- nvme_part1

pool2
mirror
-- Pro860_1
-- Pro860_2
logs
-- nvme_part2
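In command form, that layout would be created roughly like this (a sketch only; the /dev names are placeholder assumptions, and in practice you'd use the stable /dev/disk/by-id paths):

zpool create pool1 mirror /dev/sda /dev/sdb log /dev/nvme0n1p1
zpool create pool2 mirror /dev/sdc /dev/sdd log /dev/nvme0n1p2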

For both pools, the mirror vdev can handle one disk failing without immediate consequences. While degraded, though, you're vulnerable to filesystem corruption, damage, and up to total data loss from disk medium faults, since you no longer have a redundant copy. This is why disk quality is so important here: you're very vulnerable with only a single disk lost, and you remain vulnerable until the disk is replaced and the RESILVER IS COMPLETE.

SLOG quality does not mitigate this. In your case, you'd be introducing a single-device dependency on the SLOG CONTENT, i.e. if the SLOG device fails or is lost, you lose whatever commits had not yet been written to disk, on both pools at once.
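For reference, replacing a failed mirror disk looks roughly like this (a sketch; the old and new disk names are placeholders):

zpool replace pool1 6TBdisk_1 /dev/disk/by-id/<new_6TB_disk>
zpool status pool1   # the pool stays DEGRADED until the resilver completes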
 
I'm sorry, I'm new to ZFS, so I'm reading as much as I can but still have a ways to go...

If I just do:
pool1
mirror
-- 6TBdisk_1
-- 6TBdisk_2

pool2
mirror
-- Pro860_1
-- Pro860_2

So here I've eliminated the logs device on the M.2 entirely. If a disk failure occurs, I should still have one fully functional mirror disk; why would data integrity issues creep in in that case?

I can see how, if I had a cache drive failure, that could potentially impact both pools.
 
Can he use those two pools?

Sure, one fast and one slow.

Where is the Proxmox installation going?

In general: wherever you want. In practice, on the first two drives with their pool, so rearranging your SATA cables will help.

I can see how, if I had a cache drive failure, that could potentially impact both pools.

Cache, as in L2ARC, would not, because you only ever read from it (everything in the L2ARC also lives on the pool disks); the SLOG, on the other hand, would be a problem.

So here I've eliminated the logs device on the M.2 entirely. If a disk failure occurs, I should still have one fully functional mirror disk; why would data integrity issues creep in in that case?

It's better than nothing, but once a disk fails you have only one device left, and a single device cannot heal itself anymore: ZFS will still detect silent data corruption through its checksums, but it has no second copy to repair it from.
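That self-healing is exactly what a scrub exercises while the mirror is still intact (a sketch, using the pool names from the layout above):

zpool scrub pool2
zpool status -v pool2   # CKSUM counters and the 'repaired' figure show corruption fixed from the good copy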
 
Do you have any links on how to get the Proxmox install onto the zpool? I think I'm completely wasting a drive right now in one of my builds. I didn't realize you could boot from ZFS.
 
You already have the pool? Then it's more complicated, and there will probably not be a tutorial about this; it is very, very uncommon. The common way is to just install PVE directly onto a new pool that is configured via the installer.

You should send/receive one pool into the other so that one pool is empty, then install on that pool, or more precisely on the disks of that pool, directly from the PVE installer.
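The migration step would look roughly like this (a sketch; the snapshot and target dataset names are placeholder assumptions):

zfs snapshot -r pool2@migrate
zfs send -R pool2@migrate | zfs recv -F pool1/pool2_backup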
 
The common way is to just install PVE directly onto a new pool that is configured via the installer.
I can just restart from scratch; I didn't really have anything on it. I think I found the official doc. This is awesome, I basically have another SSD to play with now.
 
