New install - ZFS/Pass-Through Advice

sirfragalot

New Member
Feb 19, 2019
Hi

Apologies if I should be asking this via a different channel...but here goes.

I am setting up a new simple Home lab, mostly for serving Media, but also some dev work, and virtualisation - possibly with some PCIe Passthough for very occasional light gaming.

I intend to run Traefik, Emby, SAB, Radarr, Sonarr, Transmission, etc. in Docker Swarm (a 3x VM node swarm).

The rig I intend to set this up on is as follows:

i5 8400
32GB DDR4
500GB PCIe x4 NVMe drive
16GB Optane PCIe x2 drive
4x 4TB SATA spinners
GTX 1060 6GB
PCIe Intel Quad NIC

So, my first set of questions is about the ZFS setup. My thoughts were:

raidz1 pool consisting of the 4x 4TB spinners.
16GB Optane as ZIL (SLOG)
500GB NVMe as L2ARC
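
For reference, that layout would be created roughly along these lines (a sketch only: the device names are placeholders for your actual /dev/disk/by-id/ paths, and `zpool create` is destructive, so double-check them):

```shell
# Sketch only -- device names below are placeholders.

# raidz1 pool from the four 4TB spinners
zpool create tank raidz1 \
    /dev/disk/by-id/ata-SPINNER-1 /dev/disk/by-id/ata-SPINNER-2 \
    /dev/disk/by-id/ata-SPINNER-3 /dev/disk/by-id/ata-SPINNER-4

# 16GB Optane as SLOG (separate log device)
zpool add tank log /dev/disk/by-id/nvme-OPTANE

# 500GB NVMe (or a partition of it) as L2ARC
zpool add tank cache /dev/disk/by-id/nvme-NVME500
```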

Does this seem a reasonable layout?
Regarding the L2ARC: Would this be a waste? Would it be better to just dedicate this to VM storage? Could I partition this and dedicate 250GB to L2ARC and the other 250GB to a couple of VMs?

My logic behind the 500GB as L2ARC would be that any VMs (of a similar type, e.g. Windows, CentOS) would benefit from the cached files. But I may have got this completely wrong!

2nd Question:
I know I should be able to pass through the GTX 1060, but could I pass through the 500GB NVMe in a similar manner? Or could I just pass through a partition of that drive if I dedicated half of it to the L2ARC as per the above?

3rd Question:
My next plan is to bond the quad Intel NIC ports using 802.3ad (I have a Fortinet 124E switch stack). Would it be reasonable to assume I could leverage this to improve the throughput of an iSCSI / NFS share configured within Proxmox, for other clients on my network accessing media on the zpool above? I understand that I won't get 4Gbps, but I should see some increase? If not on a 1:1 (server:client) connection, then definitely with 1:many connections, right? Or have I got this completely wrong?!

I did a tonne of research before buying the kit and watched hours of YouTube videos, but my requirements are fairly specific, and I also understand there are lots of other concepts out there that I may not have considered which might be easier / more aligned with what I am trying to achieve... so any suggestions / advice gratefully received!

Thanks in advance!

Jon
 
Regarding the L2ARC: Would this be a waste?

Yes.

Would it be better to just dedicate this to VM storage?

Also yes.

Could I partition this and dedicate 250GB to L2ARC and the other 250GB to a couple of VMs?

Also possible, but with only 32 GB RAM I'd strongly advise against L2ARC. Still, it's best to try it out for yourself and see whether it helps; normally it doesn't. Of the three possibilities you mentioned, this is the best tradeoff, because if it does not work properly, you can just repartition and expand the other partition. Use the last partition for L2ARC, since it can easily be removed and its space added to the other partition.
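
The nice property of that layout is that a cache vdev holds no persistent pool data, so it can be dropped at any time without risk. As a sketch (the pool and partition names are assumptions):

```shell
# Try the second NVMe partition as L2ARC, and if it doesn't
# earn its keep, simply remove it again:
zpool add tank cache /dev/disk/by-id/nvme-NVME500-part2
zpool remove tank /dev/disk/by-id/nvme-NVME500-part2
```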

My logic behind the 500GB as L2ARC would be that any VMs (of a similar type, e.g. Windows, CentOS) would benefit from the cached files. But I may have got this completely wrong!

The best cache is in memory, so bump that up. Unfortunately, L2ARC is not as simple as bcache, flashcache et al., where you can really feel the difference.

I know I should be able to pass through the GTX 1060, but could I pass through the 500GB NVMe in a similar manner? Or could I just pass through a partition of that drive if I dedicated half of it to the L2ARC as per the above?

If you pass through the NVMe, you cannot use it as L2ARC, so passing through just a partition would be better. Still, I'd go with a normal filesystem and present a "real virtual disk" (file-backed) to your VM. If you choose ZFS or qcow2, you can have snapshots of your disk. You cannot have this on a passed-through disk.
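
As a sketch of the file-backed route (the VM ID 100 and the storage name "local" are assumptions, not from the thread):

```shell
# Allocate a new 250GB qcow2 disk on directory storage "local"
# and attach it to VM 100:
qm set 100 --scsi1 local:250,format=qcow2

# Snapshots then work at the VM level -- not possible with a
# passed-through disk:
qm snapshot 100 clean_install
```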

I understand that I won't get 4Gbps, but I should see some increase? If not on a 1:1 (server:client) connection, then definitely with 1:many connections, right? Or have I got this completely wrong?!

No, that is right, you can totally do that. You will see an increased throughput for multiple (different) clients.
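
For completeness, an 802.3ad bond on Proxmox is usually configured in /etc/network/interfaces roughly like this (interface names and addresses are assumptions; the switch ports must be configured as an LACP aggregate too):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With layer3+4 hashing, different client connections can hash onto different links, which is where the 1:many gain comes from; any single connection still tops out at 1Gbps.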
 

That's great, thank you for taking the time to help.

So with respect to the SLOG placement on the Optane drive, is that a good idea and should I see a performance increase on writes?

Since my OP, I have done some more reading, and Proxmox plus an Xpenology VM with disk passthrough seems to be quite a common route people take. If I did this and just passed the 4 spinners and the Optane through, would this give me a more capable fileserver for iSCSI/NFS/Samba than, say, ZFS with TKL NFS / Samba? Or would it just give me btrfs (for easier expansion) and free up the RAM that would otherwise be used by ZFS?

Thanks again, and apologies for all the questions. I am all UNIX/Linux/Docker at work but have been Windows Server at home for the past 15 years, so I want to make sure I'm not setting myself up for a massive headache over what should be a simple fileserver with web application and media download capability.

Jon
 
If you pass through the NVMe, you cannot use it as L2ARC, so passing through just a partition would be better. Still, I'd go with a normal filesystem and present a "real virtual disk" (file-backed) to your VM. If you choose ZFS or qcow2, you can have snapshots of your disk. You cannot have this on a passed-through disk.

Also, would you suggest LVM Thin or ZFS (Single disk) for this?
 
So with respect to the SLOG placement on the Optane drive, is that a good idea and should I see a performance increase on writes?

You should see an increase in performance for synchronous writes, yes. But I cannot say how fast an Optane drive is; I've never used one.
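
A quick way to measure this for yourself is a synchronous-write benchmark on the pool, run once with the log device attached and once without (the fio invocation is a sketch; the directory path is an assumption):

```shell
# Synchronous 4k writes -- the workload a SLOG accelerates.
fio --name=slog-test --directory=/tank/fio \
    --rw=write --bs=4k --size=1G --fsync=1 --numjobs=1
```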

Since my OP, I have done some more reading, and Proxmox plus an Xpenology VM with disk passthrough seems to be quite a common route people take.

Pinning hardware and passing multiple disks through to a VM is in general a very bad idea in a virtualised setup; just skip the virtualisation altogether and install bare metal. You have the same problems with high availability, but less software in between and therefore fewer pitfalls to run into.

Or would it just give me btrfs (for easier expansion) and free up the RAM that would otherwise be used by ZFS?

BTRFS also uses RAM for caching, as all other filesystems do :-D
BTRFS is also not stable if you want to use anything other than RAID0 and RAID1.
 

Thank you, that makes sense.
 
