Hello All,
First time post here, please go easy on me if anything is amiss. Also sorry for the long post.
I'm a long-time Windows user with very little experience with ZFS or Linux in general. I'm looking into moving away from my Hyper-V setup and migrating over to Proxmox as my primary hypervisor, and sorting out my mix of storage while I'm at it.
Currently, I run Hyper-V and a Windows file server on a single bare-metal install, then virtualise out the following roles: domain controller, WSUS, AV management, CA, VPN server and network controller. This leads to a load of problems if there's ever a crash or unexpected shutdown, as the Hyper-V host boots without access to the DC, among other things.
I've recently moved all my storage to Windows Storage Spaces with tiering in order to take advantage of a recent 10Gb network upgrade. It works great for simple volumes without redundancy; I'm frequently getting 400MB/s+ reads and writes with a 250GB SSD caching an 8TB drive right now. The issue comes when looking at parity setups. Frankly, parity in Storage Spaces is trash: terrible write performance, and needlessly convoluted ways to enable SSD caching to try to alleviate some of the performance issues.
So, to ZFS it is. It simply seems to be a better option for parity setups, and Proxmox is a lighter-weight hypervisor compared to full-fat Windows Server.
Here's my planned storage setup (a rough sketch of the pool creation commands follows the list). This is still very much in the planning stage and won't be happening any time soon:
Boot/root: 2x 120/240GB SSD in RAID1
VM Storage: 2x 500GB SSDs in RAID1
Pool 1: 4x 6TB or 8TB HDD in RAIDZ (drive size not decided yet)
Pool 2: 4x 6TB HDD in RAIDZ
Cache: 2x 1TB NVMe SSD. Potentially in RAID1 or just a single drive for each pool.
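To make sure I've understood the layout, here's roughly how I picture creating the pools on the Proxmox host. The device paths are placeholders (I gather the /dev/disk/by-id names should be used in practice), so please treat this as a sketch rather than exact commands:

# Mirrored SSD pool for VM disks (boot/root mirror handled by the Proxmox installer)
zpool create -o ashift=12 vmpool mirror /dev/disk/by-id/ata-SSD500_A /dev/disk/by-id/ata-SSD500_B

# Two 4-drive RAIDZ pools for bulk storage
zpool create -o ashift=12 tank1 raidz /dev/disk/by-id/ata-HDD_1 /dev/disk/by-id/ata-HDD_2 /dev/disk/by-id/ata-HDD_3 /dev/disk/by-id/ata-HDD_4
zpool create -o ashift=12 tank2 raidz /dev/disk/by-id/ata-HDD_5 /dev/disk/by-id/ata-HDD_6 /dev/disk/by-id/ata-HDD_7 /dev/disk/by-id/ata-HDD_8

I believe ashift=12 is the right choice for 4K-sector drives, but correct me if not.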
I plan on virtualising a Windows Server install to act as the file server, so pretty much all of the data from the two pools is going to be assigned to that VM, and shares will go out from there. I'll also probably have another SSD array for some shares that don't need much capacity, like redirected user profiles. I'm set on sticking to Windows-based VMs; I partially use these VMs as a UAT environment for work, as my job is entirely based on Windows Server, save for limited usage of ESXi and vSphere/vCenter.
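The way I imagine carving that up (hedged, since I haven't touched ZFS yet) is to hand the bulk storage to the Windows VM as zvols that it formats as NTFS. Names and sizes below are purely examples, and "ssdpool" stands in for the extra SSD array mentioned above:

# Sparse zvols attached to the Windows file server VM as virtual disks
zfs create -s -V 18T tank1/fileserver-data
zfs create -s -V 14T tank2/fileserver-data

# Smaller zvol on the extra SSD pool for redirected profiles and other low-capacity shares
zfs create -s -V 200G ssdpool/fileserver-profiles

From what I've read, Proxmox can also create these zvols automatically if the pools are added as ZFS storage in the GUI, so the manual commands may not even be needed; happy to be corrected on that.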
Storage usage: it's mostly media and images, but also games (nothing large or latency-sensitive, mostly visual novels and less demanding titles) and various documents. My PCs store and access pretty much everything on the file server.
My main goal here is to determine the best way of caching the hard drive pools to take advantage of my 10Gb network. I frequently write large 10GB+ files to the file server, so having a large SSD write buffer which then offloads to the HDDs would be ideal.
In terms of redundancy, I can tolerate losing whatever is in cache and hasn't been offloaded yet. Very little of the data is critical and most of it can be re-obtained, and the stuff that is critical has multiple backups and doesn't change frequently. Still, losing data is not ideal, so I would like some level of fault tolerance; I've lost 4TB of data in the past when I had some HDDs in RAID0, which was a learning experience. I also have a mix of 2TB and 4TB drives that would be removed from the current file server and repurposed into a backup server of sorts.
If you could help me work out what would be optimal in terms of ZIL and L2ARC, that would be much appreciated, as well as pointing out any glaring errors you can spot.
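For context, here's what I've pieced together so far for the two NVMe drives, although I'm not sure it actually gives me the write-buffer behaviour described above (my current understanding is that a separate log device only speeds up synchronous writes, and L2ARC only helps reads). This assumes one NVMe per pool, partitioned into a small log slice and a large cache slice; device paths are placeholders:

# SLOG (separate ZIL) and L2ARC for each pool, one NVMe drive per pool
zpool add tank1 log /dev/disk/by-id/nvme-1TB_A-part1
zpool add tank1 cache /dev/disk/by-id/nvme-1TB_A-part2
zpool add tank2 log /dev/disk/by-id/nvme-1TB_B-part1
zpool add tank2 cache /dev/disk/by-id/nvme-1TB_B-part2

From what I've read, the log partition only needs to be small since it holds just a few seconds of in-flight sync writes, losing an L2ARC device is harmless, and losing an unmirrored log device only risks those last few seconds. Is that right, or would mirroring the NVMe drives be the better move?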
System specs:
Intel Xeon E5-2680 v2 (10C20T)
Asus X79 Deluxe
32GB DDR3 1600MHz (this can be upgraded to 64GB if needed. Also, I'm sorry, but it's not ECC. Unbuffered DDR3 ECC is just too hard to come by at a reasonable price)
LSI SAS 9211-8i (flashed to IT mode; possibly two of them, since the drives connect via two SAS backplanes with 4 drives each, and a spare HBA would let me move over if one fails)
Asus XG-C100C 10Gb NIC
860W Platinum PSU
4U Case with 2x SAS backplanes, hosting 4 hotswap drives on each.