Storage Options

Well, you're golden then for equipment.

I'd do the following:

Server 1: Fast Storage
14 Samsung SSDs: storage array, RAIDZ2 (with SSDs you don't need the increased read performance of RAID10)
1 Samsung SSD: hot spare
1 Crucial SSD: ZIL (SLOG) device
USB RAID1 for OS

Server 2: Slower Storage
12 x 2TB disks: RAID10
2 Samsung SSDs for L2ARC cache
1 Crucial SSD for ZIL
USB for OS

With that setup, put the machines that need the speed on server 1, put the ones that don't on server 2, and have fun.
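For reference, a minimal sketch of how server 1's pool could be created from the shell; device names are placeholders, substitute your actual disk IDs (napp-it can do the same from its GUI):
Code:
zpool create fastpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0   # 14 Samsung SSDs, RAIDZ2
zpool add fastpool spare c1t14d0    # Samsung SSD hot spare
zpool add fastpool log c1t15d0      # Crucial SSD as ZIL/SLOG device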
 
When you say 2 Samsung SSDs for the L2ARC, do you mean 2 in a RAID1 under ZFS?

Nope. You can specify drives as L2ARC devices in ZFS, allowing them to serve as a cache in front of the other drives. They populate automatically based on your usage. There's no point in mirroring them, since it's just a cache and will repopulate if you ever need to swap out a drive. I'd just add the disks as L2ARC cache devices and let ZFS take care of the rest.
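For example (assuming a pool called tank and placeholder device names), adding the two SSDs as cache devices is a one-liner:
Code:
zpool add tank cache c2t0d0 c2t1d0   # both SSDs become L2ARC devices
zpool status tank                    # they show up in their own "cache" section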
 
Can anyone provide a link or some info on a step-by-step guide to setting up napp-it for ZFS and iSCSI and then connecting to it?

I've installed OmniOS and set up napp-it and messed with it a bit, but I'm not overly confident I've set it up right.

Any information highly appreciated!
 
All you need to do is:
Create a target in napp-it: Comstar->Targets->create iscsi target
[Optional] Create a portal group: Comstar->Target Portal Groups->create portal-group
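If you prefer the OmniOS shell over the napp-it GUI, those two steps roughly correspond to the following COMSTAR commands (the IP and tpg name are just examples):
Code:
itadm create-target                  # creates a target with an auto-generated IQN
itadm create-tpg tpg1 192.168.1.10   # [optional] portal group bound to this IP
itadm list-target -v                 # note the target IQN for storage.cfg below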

After that, add the following to your /etc/pve/storage.cfg:
Code:
zfs: your_gui_name_for_storage
       blocksize 8k 
       target name_of_target_given_when_created_a_target_in_napp-it
       pool name_of_zpool
       iscsiprovider comstar
       portal ip_of_omnios
       content images
Re blocksize: you can use any power-of-two blocksize from 512B up to 128K. 4k or 8k is normally recommended; 8k gives a huge performance boost.
 
We're looking to hire someone to help us configure this. If anyone is interested, please PM me.
 
Not sure about your virtual disk image requirement, but I would like to suggest Ceph storage. You can't really compare Ceph with ZFS-based storage such as OmniOS+napp-it or FreeNAS. Ceph provides RBD block storage, which is supported by Proxmox.
Ceph scales very nicely and provides full high availability while eliminating all single points of failure. The only drawback is that RBD supports the RAW image type only. You can add other types such as qcow2, vmdk, etc. by setting up an MDS server in the Ceph cluster.
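For what it's worth, the corresponding /etc/pve/storage.cfg entry for RBD looks roughly like this (monitor IPs and the storage name are examples; the client keyring goes in /etc/pve/priv/ceph/<storage_name>.keyring):
Code:
rbd: your_gui_name_for_ceph
       monhost 192.168.0.1:6789;192.168.0.2:6789;192.168.0.3:6789
       pool rbd
       username admin
       content images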

It is a big leap though, coming from the ZFS world. If you are sticking with ZFS, then OmniOS+napp-it, as suggested by mir, is really the most logical way.

I personally find Ceph much more manageable and expandable with less effort, probably because I've spent a lot of time with it. Several of my 40+ TB Ceph clusters have been working just fine for the last year or so.
 
Can Ceph provide this kind of speed? This was measured on a zpool of 4 x SATA disks (RAID10):

fio test.fio
iometer: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
2.0.8
Starting 1 process
Jobs: 1 (f=1): [R] [99.7% done] [71872K/0K /s] [17.1K/0 iops] [eta 00m:02s]
iometer: (groupid=0, jobs=1): err= 0: pid=26823
read : io=20480MB, bw=36402KB/s, iops=9100 , runt=576104msec

Test file:
[iometer]
direct=1
rw=read
ioengine=libaio
bs=4k
iodepth=32
numjobs=1
group_reporting
filename=/dev/vda
 
Using your fio parameters, I ran the test from inside a VM which is stored on Ceph RBD storage:

iometer: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio 1.59
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [146.1M/0K /s] [36.8K/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1472
read : io=51200MB, bw=105568KB/s, iops=26391 , runt=496636msec

The Ceph cluster has 3 nodes with a total of 12 SATA disks. No hardware RAID.
 
One of the things I like about Ceph is that as we add more HDDs and nodes to the cluster, the overall performance of the cluster goes up. 4 HDDs per node is just the starting setup that I recently did using bonding. In the next few days I am adding more HDDs to it and will post the results of this same fio test.
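Assuming a ceph-deploy based setup (node and disk names here are hypothetical), expanding the cluster with another OSD is roughly:
Code:
ceph-deploy disk zap ceph-node1:sdd    # wipe the new disk
ceph-deploy osd create ceph-node1:sdd  # create and activate an OSD on it
ceph osd tree                          # confirm the new OSD joined the CRUSH map
ceph -w                                # watch the cluster rebalance data onto it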
 
My 2c for DRBD: http://pve.proxmox.com/wiki/DRBD

DRBD might be beneficial if you run two cluster nodes and wish to have replicated shared storage.
I have used DRBD for a number of projects, and this is what I can summarise from my experience.

pros:
+ can squeeze out maximum performance vs. other clustered solutions (of course, after some tuning)
+ LVM on a network block device, with all the great features such as snapshots
+ well documented and has a big community

cons:
- not scalable: AFAIK there is no simple and safe way to set up more than 2 nodes
- tuning is required
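For reference, a minimal two-node resource definition looks roughly like this (hypothetical hostnames, backing disks and IPs); LVM is then layered on top of /dev/drbd0 as described in the wiki:
Code:
# /etc/drbd.d/r0.res
resource r0 {
    protocol C;                  # synchronous replication
    on pve1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on pve2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}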
 
