NAS Solution for backup

cbx

Active Member
Mar 2, 2012
Hi All

I know this isn't a new topic and there are already a lot of posts about it. I have reviewed many of them and still have doubts, so I'll describe our situation and hope that someone can help us choose the best option.

For now, my backup server is an "old" Dell R510 with 16 GB RAM, a quad-core Xeon CPU (E5506), a PERC H700, 8 x 2 TB SATA disks in RAID 5, and 10 Gb/s Ethernet. The biggest problem is that this server runs CentOS 5 (so zero optimization for NAS duty).
Every 7 days or so I have problems with our backups (NFS errors).

Because of this, and to get more space (I need at least 32 TB, and ideally 48 TB), I plan to replace it with a real NAS or renew the disks in this server.

The second option is much less expensive (8 x 4 TB NAS Red disks cost about €1500 on Amazon) than the first one (almost €4000 with 10 Gb networking, for example at nasexpert http://www.nasexpert.fr/?page=produ...isque=6:6:6:6:6:6:6:6&accessoire=CARTE_RESEAU). But I don't know if the second option is really optimal (I will have 50 VPS copying their backups to the NAS, from 25 GB to 100 GB each, over a 2.5 Gb/s network), nor which OS to choose in that case (I've read that FreeNAS is not very good, and that NAS4Free or OmniOS are better).

Can you help me?
Thanks a lot.
 
You wanna buy it readymade or build it?

ps: have a look at openmediavault while you are at it.

Well, I'm looking for a good solution that really works, at the lowest possible cost (like anyone in this world :) ).
I suppose a readymade solution would be better, but the cost is much higher, so it will be the option only if the first one isn't possible or optimal...
I will look into the openmediavault option; do you consider it better than nas4free or similar?
 
nor which OS to choose in that case (I've read that FreeNAS is not very good, and that NAS4Free or OmniOS are better).
I can't speak to Omnios at all. NAS4Free is what used to be FreeNAS, before iXSystems bought the FreeNAS name and developed a whole new product under it. FreeNAS seems quite popular, and has worked well for me. It's also the basis for a fairly popular commercial product, TrueNAS. In what way(s) did you read that it's "not very good"?
 

Well, I have read various posts:

- http://forum.proxmox.com/threads/21205-iSCSI-Reconnecting-every-10-seconds-to-FreeNAS-solution
- http://forum.proxmox.com/threads/16355-Proxmox-VE-3-1-slow-NFS-reads

Moreover, it seems difficult to get good hardware compatibility on a server with a RAID card (for example, on an R410 with 4 x 600 GB SAS and hardware RAID, the disks are not detected).
I haven't tested this OS specifically, but for now it doesn't seem like the best option, based on what I've read...
 
It's true that FreeNAS doesn't like hardware RAID controllers, though this is more a ZFS thing than specifically a FreeNAS thing. If replacing the H700 with an HBA isn't an option, you should also remove NAS4Free from your list--it's still using the same FreeBSD ZFS code.

I haven't used iSCSI to say one way or the other, but I haven't had any issues with NFS on my FreeNAS box.
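For what it's worth, once the disks sit behind a plain HBA, ZFS sees them directly and pooling is straightforward. A minimal sketch, assuming FreeBSD-style device names (the pool name and da0..da7 are placeholders, not your actual devices):

```shell
# Sketch: build a RAIDZ2 pool from 8 disks presented raw by an HBA.
zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Every member should show up as a plain disk, not a RAID volume:
zpool status backup
```

On FreeNAS itself you'd do this through the web UI rather than the shell, but the underlying layout is the same.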
 
If self-building is an option, look at Storage Pod-inspired designs:
https://www.backblaze.com/blog/storage-pod/
http://openstoragepod.org/

We have our own Storage Pod-inspired cases built by a contractor.

There are commercial ones for sale at e.g.
http://www.45drives.com/products/
We use self-built "Storage Pods", with and without backplanes, in cases made to spec by a contractor (a case costs around €400).

Mainboard with 10+ SATA 3 ports, for full speed on the SSDs.
HBAs in JBOD mode, or HBAs with SFF-8087 connected to backplanes bought as replacement parts from Supermicro, depending on the number of disks we use in a server.
Remember you can use PCIe risers and PCIe breakout cables, so you can use mainboards with fewer PCIe slots.


2x small SSDs for the OS
8x+ SSDs for cache
20x-40x HDDs (we only use consumer grade; €/TB is key)

Network:
Onboard dual 10G NICs,
or Intel X520-DA2, 2x 10GBase SFP+ NICs (€300),
or Intel XL710-QDA2 bulk, 2x 40GBase QSFP+ (€400) with QSFP+ to 4x SFP+ breakout cables (€200) <-- we use those on newer servers.


OS:
We use Ceph with replicated and erasure-coded pools, backed by an SSD caching tier; no dedicated journals.
On top of that we run multiple OpenMediaVault NAS instances with virtual RAID-0 disks (100 GB each); depending on the use case, they sit on the replicated or the erasure-coded pools.

We use some of these pods for backups only, and some for VMs + production storage.

I don't see why you couldn't just run NAS4Free or OpenMediaVault on them. I also don't see why you couldn't take one of the multitude of 16- to 40-bay cases available and stick 2-4 SSDs + 8 TB drives in them (2x OS, 4x SSD cache, 10x 8 TB = 80 TB raw storage in a 16-bay case).
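To make the Ceph side concrete, a rough sketch of the pool layout described above (this assumes a running Ceph cluster; the pool names, PG counts, and the k=4/m=2 erasure profile are example values, not our exact settings):

```shell
# A replicated pool (Ceph's default is 3 copies of each object).
ceph osd pool create backup-rep 128 128 replicated

# An erasure-coded pool: k=4 data chunks + m=2 coding chunks
# survives the loss of any 2 OSDs at ~1.5x raw-space overhead.
ceph osd erasure-code-profile set ec-4-2 k=4 m=2
ceph osd pool create backup-ec 128 128 erasure ec-4-2
```

The OpenMediaVault instances then just see virtual disks carved out of those pools.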
 
As others have already commented, I too vouch for FreeNAS. It is what you'd call "just works". We have 2+ dozen FreeNAS and Gluster deployments running 24/7 for the last several years, mainly used as backup storage. No complaints thus far. For distributed backup storage with redundancy, Gluster is an excellent choice in my experience.

If you are looking for a single-node solution and are going to cram in as many HDDs as a chassis will fit, then OmniOS + napp-it is the best option. It takes a while to get used to, but once configured it makes up for it with performance and reliability.

All these options are in-house brewed, not ready-made. We tend not to go with ready-made stuff, purely because of cost and flexibility. FreeNAS/Gluster/OmniOS + napp-it just give way more options/features without paying enormous money to a vendor. I would not say this is for everybody; it works for us because we have the manpower and skill to manage them on our own. The amount of money we save in the process is staggering. Hope this helps.
 
The IBM M1015 is pretty much a rebadged LSI 9211-8i; that same card has also been sold under Dell's name, though I don't recall the model designation off the top of my head. It, and all the 9211 variants, work very well under FreeNAS. They're also considerably less expensive on eBay. When using this card with FreeNAS, it should be flashed with IT-mode firmware, and (with any OS) the firmware version should match the driver version being used by the OS--with the current version of FreeNAS, that would be version P20.
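For anyone attempting the crossflash, a rough sketch of the procedure (the exact steps vary by card and vendor, and the file names below are examples from an LSI 9211-8i P20 firmware package -- double-check a current guide before erasing anything):

```shell
# List adapters; note the index and current firmware/BIOS versions.
sas2flash -listall

# Erase the existing (IR/vendor) firmware -- destructive, be careful.
sas2flash -o -e 6

# Write the P20 IT-mode firmware and (optionally) the boot ROM.
sas2flash -o -f 2118it.bin -b mptsas2.rom

# Verify the card now reports IT firmware at P20.
sas2flash -listall
```

On an M1015 specifically, the IBM SBR usually has to be cleared first (with the DOS megarec tool) before sas2flash will accept the LSI firmware.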

My hardware probably isn't particularly relevant, since my use case is totally different, but I'm using a 12-bay Supermicro chassis with 6 x 4 TB, 3 x 3 TB, and 3 x 2 TB disks, in a single pool. I'm using an M1015 card to drive 8 of them, the remainder being on my motherboard's SATA ports (should have bought a chassis with a SAS expander backplane). I back up to it, and run media from it, but I don't run live VMs from it.

I'm not really trying to be a FreeNAS cheerleader as such. It may be good for your use case, or it may not (though I'm thinking it does sound like it could be a good match). I just didn't think the "not very good" was warranted as what sounded like a blanket statement.
 
Ok, Q-wulf, the 45drives products look fantastic! But for now we have a small budget... If we can renew our Dell with 8 new HDDs and a new card, that will be great for now. I see that the general opinion of, and experience with, FreeNAS is good, so it can be a really good (and not expensive) option for us.
 
You've stated a requirement for 32 TB capacity, and ideally 48 TB. FreeNAS (or no doubt any number of other solutions) can certainly manage that, but you won't get there with 8 x 4 TB disks if you want any redundancy. With 12 x 4 TB disks in a single RAIDZ2 vdev (which is on the edge of what's recommended), you'd have 40 TB, or 36 TiB, of net storage capacity, less filesystem overhead and reserved space. If you're not interested in redundancy, of course, that changes the picture.
 

Well, for now we have 8 x 2 TB in RAID 5, so 12 TB, and we have less than 1 TB available. With 40 TB, or even 36, we will have 3x more space, and normally we'll have the space we need for the next few years... So you're right: I should probably buy 5 or 6 TB disks rather than 4 TB.
 
Ok, Q-wulf, 45drives products look like fantastic! but for now we have small budget....[...]
Backblaze has CAD files for their Storage Pods available on their blog for public use.
That's why we have them built locally. The prototype single-run cases (cutting/welding/assembly/coating) cost us roughly €400 in materials and labour (excluding components), based on Backblaze's specs. I have seen people make cases from wallboard/plywood... I personally prefer steel or aluminium sheets; not that expensive if you find a shop that can do it locally.

To give you an example of the economies of scale those pods can deliver:
The cases we have built for us now have been heavily modified from the original Storage Pods. They're basically storage towers with bottom-to-top airflow (think desktop tower with 90°-rotated mainboards, so air can flow through the add-in cards/drives like a chimney).

1 sliding drawer per node, each with 3x 16 90°-rotated 3.5" backplanes (4x4) with hot-swap access from outside (no drive cages, just simple plug & play, so we keep cost low, replacement fast, and airflow unobstructed), a 1x 8 2.5" (2x4) 90°-rotated internal backplane (SSDs), plus 2 internal mountings for the OS SSDs.
We put 2 sliding drawers next to each other, stacked 5 high (roughly 2.8 metres of hot-swap HDD wall).
That makes 10 nodes per tower: roughly 400 (4-8 TB) HDDs + 100 (256 GB) SSDs in total. We normally keep 2 3.5" slots per backplane empty for when we change drives; it makes things convenient. We also used them at least twice when a redundant PSU blew and took a complete node down: we just pulled all the OSDs from it and distributed them among the other nodes, then yelled at the logistics guy because he had run out of replacement PSUs and the new ones took 4 days to arrive.
For cooling we use sliding fan drawers. Each node has 2 fan drawers with 3x 140 mm fans each; that's basically 12x 140 mm fans cooling 10 nodes. There are also 2 dust-filter drawers below them (just for kicks, and to keep the interns busy). The drives in the top drawers never exceed ambient +7°C. Pretty efficient if you ask me :)
We use dual 40G NICs but 10G switches, because management is cheap (as in Mr. Burns cheap) and because the 40G ports nicely break out into 2x 4x 10G links that we can plug into quad-redundant cheap 10G switches.

At that point, the cost for the tower + drawers + 10x node hardware (redundant PSUs/mainboard/CPU/RAM/networking/cooling/SSDs/2 SAS expanders) comes down to about €150 per drive, or €37-€19 per TB. In other words:

€60k for 1.6-3.2 PB of raw storage capacity (prices of the HDDs not included; the SSDs are).
We do as much as possible with consumer-grade hardware, especially SSDs. For HDDs we nowadays use 4 TB, 6 TB, and 8 TB; price/TB and power consumption are the only deciding factors. (We still have some derelict 2 TB and 3 TB drives we are phasing out, and also some non-storage-tower systems we are migrating out because they cost too much to keep running.)

We operate 5 of those storage towers (only 1 is fully kitted out with OSDs; the rest are at around 15% until next year's IT budget).
 
