Recommended storage system???

Feb 10, 2016
I am about to configure a NAS that's supposed to supply the storage for 3 virtualization servers, and I need some advice regarding which base system to choose. The NAS has good processors, sufficient RAM, good disks and 10G connections between the servers… So I am mostly looking for general recommendations.

I was thinking about either a system primarily based on Ceph (RBD) or a system primarily based on some iSCSI variant, and I am leaning toward the Ceph (RBD) based system for 2 reasons: the development is more active, and it has full snapshot support. On the other hand, iSCSI is a very well-tested technology (absolutely no gambles), and with ZFS over iSCSI I would also have full snapshot support.
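For comparison, the two candidates end up as rather similar entries in Proxmox's /etc/pve/storage.cfg. A rough sketch (pool names, monitor addresses and the target IQN are just placeholders, not a tested configuration):

    # Ceph RBD (assumes an existing cluster and an installed keyring)
    rbd: ceph-vm
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        username admin
        content images

    # ZFS over iSCSI (assumes a ZFS-capable target host)
    zfs: nas-vm
        portal 10.0.0.10
        target iqn.2016-02.example.com:storage
        pool tank
        iscsiprovider comstar
        blocksize 4k
        content images

Both would give snapshots and thin provisioning from the Proxmox side; the difference lies in what sits behind the storage entry.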

The 3 virtualization servers will of course run Proxmox VE…

Do any of you have experience or recommendations regarding the differences between Ceph (RBD) and iSCSI?

Best regards
Jacob Tranholm
 
I could create a Ceph cluster using the 2 NASes and also most of the local storage space in the 3 virtualization servers. All 3 virtualization servers have 4 HDs, but all of the VMs are going to use network storage, so this limited local disk space (4 x 300GB enterprise disks) is only for the OS, and the remaining space could be used in a Ceph cluster. Both NAS servers have good hardware... I also have 2 older NAS servers with much less impressive hardware; I could also use these but was planning on retiring them.
 
If you want any decent performance from Ceph you need either Xeon or Opteron processors.

Both NASes have 2 x 6-core Xeons and all 3 virtualization servers have 2 x 8-core Xeons. But I will give the old NASes their deserved retirement...

The first NAS was intended as storage for the VMs, and the second one was only intended for backups of the VMs. But since Ceph has built-in disaster recovery, I could include the second NAS in Ceph and just keep backups at our offsite backup storage.

But I have to admit that at the moment I am leaning towards using the classical iSCSI configuration for the first NAS and primarily using the second NAS for VM backups. In short: stick to the safe, well-known structures... And here I will probably use ZFS over iSCSI to get the snapshot functionality (primarily for backups).
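(To illustrate what that snapshot-based backup would look like, with placeholder pool and dataset names: the VM disks live as zvols on the primary NAS and get replicated to the backup NAS:)

    # Snapshot a VM disk and ship it to the backup NAS
    zfs snapshot tank/vm-100-disk-1@backup-20160210
    zfs send tank/vm-100-disk-1@backup-20160210 | \
        ssh backup-nas zfs receive backup/vm-100-disk-1
    # Subsequent runs only transfer the incremental delta
    zfs send -i @backup-20160210 tank/vm-100-disk-1@backup-20160211 | \
        ssh backup-nas zfs receive -F backup/vm-100-disk-1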
 
Powerful NAS indeed ;-)
But apart from a powerful CPU, Ceph also requires a lot of disks, at least 6-8 per node.
What kind of OS are you planning to use on the NAS for ZFS over iSCSI?
 

The primary NAS has 12 x 3TB 3.5" SAS disks, and the secondary one for backup contains at the moment just 8 empty 3.5" caddies. I was only planning on using the secondary one for backups, and may still choose cheaper SATA disks; that depends on whether I am going to use it for Ceph or just for backups. Regarding RAM, the secondary NAS only has 48GB, where the primary has 96GB.

Regarding the OS, I just want to use the best UNIX-based system for the task. I usually manage my servers from the command line and I can easily live with most Linux or BSD systems. If I were to install Ceph I would probably select some CentOS variant (mostly because Red Hat has fingers in the development of Ceph). But I usually prefer Debian-based systems, so for iSCSI I would probably choose something like OpenMediaVault (the Jessie-based version). I read somewhere that FreeNAS has some problems with iSCSI, but among the BSD-based systems I could also use NAS4Free.
 
The only currently viable option of enterprise grade is to choose a Solaris-based OS for your NAS. OpenMediaVault uses IET as target, which is unmaintained code that is removed from Debian as of Stretch in favor of LIO (support for LIO is not in Proxmox yet, but I am working on it). FreeNAS and NAS4Free use CTL, which is not available in Proxmox, and for both FreeNAS and NAS4Free the only proper way of supporting ZFS over iSCSI would be to use their respective APIs, since this is the only way of synchronizing iSCSI state between OS and GUI. For a Solaris-based solution I can recommend OmniOS, with napp-it as a web-based GUI for it.
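For reference, the COMSTAR side of such a setup is only a few one-time commands on OmniOS (a sketch using the stock illumos tools; Proxmox afterwards creates the zvols and logical units itself over ssh):

    pkg install storage-server                           # COMSTAR packages, if missing
    svcadm enable stmf                                   # the SCSI target framework
    svcadm enable -r svc:/network/iscsi/target:default   # the iSCSI target service
    itadm create-target                                  # prints the auto-generated IQN
    itadm list-target -v                                 # the IQN goes into storage.cfg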
 

OK... I will follow your recommendation and try the OmniOS and napp-it combo (and iSCSI for the server). I actually haven't played around with Solaris since about 20 years ago at university, where Solaris was the only real possibility. But my preference for Linux and the BSD systems is traceable back to that period of my life, and I am looking forward to re-establishing the connection to Solaris. I am however afraid the label will be wrong on the server: my experiences with Solaris are all connected to Sun SPARC machines... But I look forward to expanding my universe and re-associating Solaris with other hardware.

Thanks for the constructive advice...
 
You will find that the step is not so steep anymore, since current OmniOS uses GNU tools in userland and the ZFS commands are the same. The only things really different are the Solaris (actually OpenSolaris) specific tools for network configuration and interaction with the kernel, and using SMF instead of SysV init or systemd. Add to this that you get DTrace, in-kernel ZFS, and Zones. (The current stable has LX zones manageable from napp-it, and the next OmniOS LTS (mid-2017) will ship with LX zones. LX zones means running Linux inside a zone; they can be loaded as a ZFS stream binary available from SmartOS or as tar-gzipped images for LXC.) Solaris also supports KVM, provided you have an Intel CPU with virtualization support, which is the case for you. So all in all: if you have supported hardware, running OpenSolaris is not that much different from running a current Linux distro.
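(A few illustrative equivalents for the day-to-day commands, for anyone making the same move:)

    svcs -a                               # roughly: systemctl list-units
    svcs -xv                              # why a service is down or in maintenance
    svcadm enable ssh                     # roughly: systemctl enable --now ssh
    svcadm restart network/iscsi/target   # roughly: systemctl restart <unit>
    ipadm show-addr                       # roughly: ip addr
    dladm show-link                       # roughly: ip link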
 
Hi,
you should take a look at openATTIC - it's much more professional than OpenMediaVault.
For Ceph: for good performance you need enough nodes (like 8) and enough OSDs (and good journal SSDs for fast writes); a rough sketch of how those are created follows below.

Udo
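(For a concrete picture: with Proxmox's own tooling, each node's OSDs and their shared journal SSD would be created roughly like this. PVE 4.x-era syntax; device names are placeholders:)

    pveceph install                                      # install the Ceph packages
    pveceph createmon                                    # first monitor on this node
    pveceph createosd /dev/sdb -journal_dev /dev/sdf     # sdf = shared journal SSD
    pveceph createosd /dev/sdc -journal_dev /dev/sdf
    pveceph createosd /dev/sdd -journal_dev /dev/sdf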
 
But openATTIC uses LIO, so no support for ZFS over iSCSI in Proxmox currently.
 
This is just a question: will a Linux with the SCST core (http://scst.sourceforge.net/) give a working iSCSI target with support for ZFS over iSCSI?
No, and I see no reason why Proxmox ever should.
Two reasons for this:
1) LIO is chosen by all the major Linux distros as their iSCSI target, which SCST is not. I.e., SCST is a second-class citizen.
2) LIO is maintained inside the Linux kernel, which SCST is not. I.e., SCST is a second-class citizen.
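(For context, this is roughly what a Proxmox LIO backend would have to drive through the stock targetcli tool to export a zvol; the IQN and paths are examples:)

    targetcli /backstores/block create name=vm-100 dev=/dev/zvol/tank/vm-100-disk-1
    targetcli /iscsi create iqn.2016-02.example.com:target1
    targetcli /iscsi/iqn.2016-02.example.com:target1/tpg1/luns \
        create /backstores/block/vm-100
    targetcli saveconfig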
 

So the only Linux iSCSI versions fully supporting ZFS over iSCSI for Proxmox use kernels earlier than 2.6.38? If so, Debian Squeeze was the last one with an iSCSI target supporting ZFS over iSCSI for Proxmox.

I have to admit that I have difficulties understanding why Proxmox hasn't put all resources into developing full support for LIO. And in this situation I will not blame the people believing in SCST. This makes other storage solutions more attractive... I believe in freedom of choice (also related to the OS for iSCSI servers). And quite frankly: I also believe in the rights of second-class citizens.

I will however still follow your advice and install the OmniOS and napp-it combo. But I hate not having a second choice...
 
This is not true. Current Debian stable uses IET, which is supported by Proxmox. Debian Stretch (the next stable) has removed IET in favor of LIO.
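(On Jessie that means the iscsitarget package, and the target definitions Proxmox maintains over ssh end up in /etc/iet/ietd.conf in this form; the IQN and zvol path are examples:)

    Target iqn.2016-02.example.com:tank.vm-100-disk-1
        Lun 0 Path=/dev/zvol/tank/vm-100-disk-1,Type=blockio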
 
This is not true either. OpenMediaVault (OMV), based on Debian Jessie, can provide IET iSCSI targets for Proxmox and has excellent support for ZFS (ZoL, ZFS on Linux). Disclaimer: I am the coauthor of the ZFS support in OMV.

Thanks... I just needed to understand which iSCSI target implementations work with Proxmox.

I have to admit that the 3 Proxmox servers I am managing at the moment all use local ZFS for storage, so I am not accustomed to having these problems with iSCSI. But I am in the process of retiring 2 of these servers and moving the VMs to servers using iSCSI for storage. I received the necessary extra 8 x 10G module for the modular switch yesterday and expect to mount the new servers on Saturday. So this is my preliminary preparation for understanding the problems related to iSCSI, without actually having the systems myself.

My conclusion about LIO since 2.6.38 came from here: http://scst.sourceforge.net/comparison.html. And I didn't know which target, IET or LIO, the individual distributions used.

And thanks for helping me understand which systems will work with Proxmox. Here I needed some experience from others...
 

Just one final question: I've been having problems getting both the 10G fibre and the RAID controller working using OmniOS, and all of the hardware works out of the box using Debian Jessie. How about if I install Debian Jessie, use ZFS (from the zfsonlinux repository) as the root system, and on top of that install napp-it. Will that work??? I have been using ZFS as the root system for Debian for some years and this is stable. So the only problem is whether ZFS on Linux supports using a ZFS volume as an iSCSI LUN.
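(At least the block-device half of that is safe: ZFS on Linux exposes every ZFS volume as an ordinary block device, which any Linux iSCSI target can then export. With placeholder names:)

    zfs create -V 32G tank/vm-100-disk-1
    ls -l /dev/zvol/tank/vm-100-disk-1    # symlink to a /dev/zdNN block device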
 
