Storage Options

VC-Mike

Hello,

I'm new to Proxmox and these forums so I apologize if I'm posting in the wrong place.

We're moving over from OnApp, but in the process we're wiping out our SAN (currently powered by OpenFiler) and are looking for a new solution. Does anyone else use an open-source storage solution, such as FreeNAS or something similar? Do you use NFS, iSCSI or something else? If NFS, what filesystem do you use?

We're just looking for the most recommended options so we can pin them up and make a decision on the best solution for us.

Thanks so much in advance! :D
 
If you want something enterprise grade you should take a look at a Solaris-based storage server using ZFS. I would personally recommend OmniOS, and if you need/want a GUI you should consider napp-it -> http://pve.proxmox.com/wiki/Storage:_ZFS. At the moment ZFS storage in Proxmox must be configured manually from the command line, but a full GUI is soon to come. I use NFS over ZFS for containers and iSCSI over ZFS for KVM. FreeNAS is also a widely used option, but its ZFS iSCSI performance is not as good as the Solaris-based alternatives and it is certainly not enterprise ready. Alternatively you could consider a clustered solution; Ceph has support in Proxmox -> http://pve.proxmox.com/wiki/Ceph_Server.
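
In case it helps, this is roughly what those two kinds of export look like on the OmniOS/illumos side. A minimal sketch only; the pool name "tank", the dataset/zvol names, the network range and the LU GUID are placeholders:

# share a dataset over NFS for containers
zfs create tank/containers
zfs set sharenfs="rw=@192.168.1.0/24,root=@192.168.1.0/24" tank/containers

# export a zvol over iSCSI (COMSTAR) for KVM disks
zfs create -V 100G tank/vm-101-disk-1
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/tank/vm-101-disk-1   # prints the LU GUID
stmfadm add-view <lu-guid>                           # GUID from the previous command
itadm create-target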
 
Lots of great information there, thank you very much!

Going to load OmniOS and napp-it onto a KVM-based VM today to test it out. I've never heard of or used it before, and we have a week to decide on the SAN platform, so that should give me plenty of time to figure it out, I imagine.

Does anyone else have any suggestions?
 
I also saw that NexentaStor is recommended among the Solaris-based options. Are there any benefits to choosing OmniOS over NexentaStor, or vice versa?

I tried a quick install of OmniOS and Napp-It earlier today but the Napp-It interface is quite large and would take a while to get used to. Is there anything free/open-source that is updated regularly?
 
NexentaStor cannot be used commercially without a valid license.

Can you define "[..] the Napp-It interface is quite large and would take a while to get used to"?
 
I think it's just because I've never used it before and it's considerably different to what I have used previously.

It was only a quick look on a system I couldn't really do much with.
 
My recommendations would be:
1) ZoL on Proxmox, native or exported
2) Gluster for live migration (or just use storage migration)
3) If you use a SAN setup, use 10GbE or fiber

We run ZFS and have been in production for 2 years (since before KVM, when we were on VMware). Proxmox has an installation and setup tutorial which is pretty easy to follow:
http://pve.proxmox.com/wiki/ZFS
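
For reference, the local ZoL setup roughly boils down to a few commands. A rough sketch assuming a single mirrored pair; the pool name, device names and storage ID are made up, and by-id device paths are generally preferred over sdX names:

zpool create -f -o ashift=12 tank mirror /dev/sdb /dev/sdc
zfs set compression=lz4 tank
zfs create tank/vmdata

# then point Proxmox at it, e.g. a directory storage entry in /etc/pve/storage.cfg:
# dir: local-zfs
#     path /tank/vmdata
#     content images,rootdir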

Back when we were using VMware we actually ran an internal OmniOS ZFS server that owned the controller through IOMMU (there are downsides to this kind of setup, the passed-through controller in particular), and one of our main reasons for moving to KVM was native ZFS (ZoL) on the host server. Not everyone agrees that combined compute and storage nodes are the way to go, but we've had great success with it. For HA/DR we use Gluster on ZFS, but we also run machines locally on ZFS. Unless you have more than one compute node there are few advantages to running your VMs over a network protocol, and honestly, since Proxmox (qm) supports storage migration (yes, this takes much longer) you don't actually need it even for live migration.

I agree that Napp-it is an extremely nice UI for ZFS, but after using Omni for 2+ months I far preferred CLI anyway (Napp-It does not work with ZoL since Linux CLI is so different from Solaris). When we ran Omni we used NFS instead of iSCSI (iSCSI is usually async by default btw).
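
If you want to check how synchronous writes are being handled on the backing dataset or zvol, the ZFS sync property is easy to inspect and override; the dataset name below is just an example:

zfs get sync tank/vmdata
zfs set sync=always tank/vmdata    # force synchronous semantics (slower without a fast log device)
zfs set sync=standard tank/vmdata  # the default: honour whatever the client requests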

A week is a super tight timeline to choose something as complicated as a SAN setup. My recommendation would be to use ZFS locally until you figure out what you're doing for the SAN, and to rely on storage migration (if you need live migration) until then.
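
For what it's worth, both kinds of migration mentioned above are driven from qm; a rough sketch with made-up VM ID, disk, node and storage names:

qm move_disk 101 virtio0 other-storage --delete   # storage migration: move one disk of VM 101 to another storage
qm migrate 101 node2 --online                     # live migration to another cluster node (shared storage)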

Also, if you're using ZFS you need to understand some core concepts; try #zfsonlinux on Freenode.
 
"Napp-It does not work with ZoL since Linux CLI is so different from Solaris". Latest Napp-it works on Linux too.
"
Current default release: 0.9f1 (also as ESXi VM available)
You can update from 0.9a3+ without a reboot for newest bugfixes,
download/ opt reload version with newest date, check changelog for details

Release: 0.9f1 with support for OmniOS 151010, Solaris 11.2, Ubuntu14LTS and Debian7
On Linux you cannot downgrade to older versions and should not upgrade
at the moment. Preview versions include new features and modified code
for current features that may contain bugs." http://www.napp-it.org/downloads/changelog_en.html
 
I stand corrected; I should have said "last time I used..." etc.

I'm glad napp-it support is being moved to Linux. It looks like certain functions haven't been and won't be ported to Linux: http://www.napp-it.org/downloads/linux_en.htm

"Linux is not my preferred or main platform. Many napp-it features rely on Solaris projects like the CIFS server with Windows SID and ACL support, Comstar iSCSI or Crossbow network virtualisation. A napp-it version with similar functionality like on OmniOS is currently not planned."

However, for a first time ZFS experience being able to create pools, datasets and monitor via web interface is a pretty big deal. Hopefully some part of the Linux community will help further develop the best ZFS UI I've seen so far.
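
For anyone doing it without the GUI in the meantime, the basics that web interface wraps are only a handful of commands anyway; pool, disk and dataset names here are just examples:

zpool create tank mirror c1t0d0 c1t1d0   # illumos-style disk names; use /dev/disk/by-id/... on Linux
zfs create -o compression=lz4 tank/data
zpool status tank                        # pool health
zfs list -r tank                         # datasets and space usage
zpool iostat -v tank 5                   # live per-vdev I/O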
 
"Comstar iSCSI or Crossbow network virtualisation. A napp-it version with similar functionality like on OmniOS is currently not planned."
Comstar is of course not likely to ever be supported, since Comstar is not available on Linux.
"However, for a first time ZFS experience being able to create pools, datasets and monitor via web interface is a pretty big deal. Hopefully some part of the Linux community will help further develop the best ZFS UI I've seen so far."
There is a work in progress to implement support for ZFS in OpenMediaVault. The backend is more or less completed (by one other guy and me); the only thing missing is the GUI part, which is not something I am involved in at a leading level.
 
I've been doing a little research on ZFS with regard to RAID usage. I've read that a hardware RAID controller must be set to JBOD; is this correct?

As for the ZFS RAID, my own conclusion is to use RAIDz2. The array would be 10 or 14x 2TB SATA3 hard drives; what would you suggest?
 
You set the RAID card to JBOD, or IT mode if you're on an LSI. You don't want hardware RAID of any form in front of ZFS, as it handles that itself.

Repeat: hardware RAID + ZFS = no, no, no.

I currently use a "RAID10"-style setup for my ZFS pools. ZFS calls it a striped mirrored vdev layout: you basically create as many mirrored pairs as you can and then stripe across them. So I'd say make 7 mirrored pairs, then stripe across them, giving you decent capacity and failure protection.
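
Creating that layout is just a matter of listing the mirror vdevs one after another; a sketch for seven pairs with placeholder device names (by-id paths are usually a better idea than sdX):

zpool create -o ashift=12 tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh \
  mirror sdi sdj \
  mirror sdk sdl \
  mirror sdm sdn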

I've been running an OmniOS + napp-it setup for 2 years, and am very happy with it. Check out Joyent's BOMs (bills of materials), which list the exact hardware they use for optimal Solaris compatibility, if you're building from scratch.
 
Yeah, I'm in two minds whether to do RAIDz2 or RAID10 (through ZFS); I read that both are roughly equal, except that RAID10 uses more space for redundancy than RAIDz2.

I was wondering if this would be suitable at all?

16 Bay Chassis
---
OS - 2x SSD in RAID1 (HW or no HW?)
Storage - 13x 2TB Western Digital RED (ZFS RAID10 or RAIDz2)
Log cache - 1x SSD added to the storage pool.

I heard it's beneficial to use an SSD for the ZFS log. Either way, we have all the hardware; the drive/RAID layout is still to be decided.
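
If that SSD does go in as a dedicated log device (SLOG), attaching it is a single command; pool and device names below are placeholders, and keep in mind the log device only helps synchronous writes:

zpool add tank log /dev/disk/by-id/ata-Example_SSD_LOG
# an L2ARC (read cache) device is added the same way:
zpool add tank cache /dev/disk/by-id/ata-Example_SSD_CACHE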

Do you have a link to Joyent's BOMs?
 
I'll take a look for the link to Joyent's BOMs; I received them from a Joyent member in their IRC channel. If I find an updated version, I'll post it here.

You're missing 2 *really* big parts of ZFS in your hardware there: you need your L2ARC as well as your RAM. The log cache drive you have will help, but isn't the perfect fit. Here is the spec I've got for one server I'm running now, and I'm pleased with it:

2U server, 12 hot-swap bays
Rear hot-swap 2x 2.5" SSD bays
Redundant 920W power supply
2x Xeon E5-2620 v2
32 GB ECC DDR3-1600
2x Intel 256 GB SSD - L2ARC
1x Intel 60 GB SSD - ZIL (log)
2x Kingston DataTraveler R3.0 16 GB flash drives - OS in ZFS RAID1
12x Hitachi 3 TB SAS hard drives


That gives you plenty of processing power to use deduplication and compression, and plenty of L2ARC to allow super fast reads. Nearly 530 GB will be cached. With ZFS RAID10 I'm getting slightly over 13 TB usable space, with a hot spare in there.
Flash drives as the OS may seem slow, but since all the logging is handed off to the ZIL drive, it really isn't a problem.
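
If you want to see how much of that ARC/L2ARC actually gets hit on a ZFS-on-Linux host, the counters are exposed under /proc; a quick sketch (the arcstat/arc_summary helpers ship with most ZoL packages, sometimes named arcstat.py/arc_summary.py):

grep -E '^(hits|misses|l2_hits|l2_misses|size) ' /proc/spl/kstat/zfs/arcstats
arcstat 5        # rolling hit/miss rates every 5 seconds
arc_summary      # one-shot summary of ARC and L2ARC usage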
 
All our SSDs are Samsung EVO 840 Pro's @ 256GB each. So you're saying we should put in 5 SSDs, 10x 2TB and 1x hot spare?

We don't have rear-facing SSD slots unfortunately; all bays are front-facing. Our spec is pretty much the same as yours except for the larger chassis.
 
Nope, we're suggesting the following:

3 SSDs: 2 for L2ARC, 1 for the ZIL (log).
2 USB 3.0 drives for the OS.

Samsung EVOs will work for this; I'm just partial to the Intels as I've never had a problem with them. A 256GB drive is way overkill for a ZIL, but you've got it already, so it'll work.

With 10x 2TB and 1x hot spare, you'll have approx. 9.6 TB usable in a RAID10 situation.
 
I've just checked our stock levels and we have 4x 120GB Crucial SSDs as well. So this is what we have in stock right now:

4x 120GB Crucial SSDs
24x 250GB Samsung EVO SSDs (pulled from a previous project; planned to be used as RAID1 OS arrays for the six hypervisors)
19x 2TB Western Digital RED

2x 3U 16-bay servers - either 2 storage SANs, or 1 backup SAN & 1 storage SAN.
6x 1U hypervisors - 2x SSD for a RAID1 OS array.

I'm trying to come up with a solid drive setup so we know whether to buy more 2TB drives. Thank you so much for all your help.
 
