Ceph vs ZFS (OpenIndiana + napp-it): which one is better?

kumarullal

Hi All,
From what I have read on this forum, user mir is an expert on ZFS and Wasim (Symcon) is an expert on Ceph.
According to mir, ZFS is faster than Ceph, whereas Ceph provides a clustering option and ZFS does not (a ZFS clustering option can be procured, but it is costly).
What would be the ideal solution of the two technologies if storage clustering is not needed in a small-to-medium Proxmox cluster?
The main goal is to have a storage solution that supports multiple-drive redundancy/failure and hot-swapping.
I know both solutions support this; neither requires a hardware RAID setup, and both can be installed on commodity hardware.
ZFS can be used to create a software RAID (RAID-Z, for example), while Ceph provides drive redundancy without any RAID setup.
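(For reference, the ZFS software-RAID side looks roughly like this; the pool name "tank" and the disk names are just placeholders, and on illumos-based systems disks are named like c1t0d0 instead:)

  zpool create tank raidz2 sda sdb sdc sdd sde sdf   # double-parity vdev: survives two drive failures
  zpool add tank spare sdg                           # hot spare for quick replacement
  zpool status tank                                  # verify layout and health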
The final decision should be based on the following:
  1. Once set up, it should run flawlessly.
  2. During a drive failure, it should be quick and easy to fix.
  3. Performance should be as good as (if not better than) iSCSI LVM storage. (A plus with NFS or RBD is that OpenVZ templates and containers, KVM images, ISOs, backups, and the qcow2 file format can also be part of the storage infrastructure.)
  4. It should be possible to expand the existing storage by simply adding more disks.
  5. It should reduce complexity and be easy to set up and operate.
Which of the two solutions is ideally suited to achieve these goals?
Thanks in advance.
 
Well, you don't need clustered storage, so that makes it quite easy. I would say ZFS. I am quite confident that ZFS has better integrity checks than Ceph does. It is really comparing apples to oranges, IMO.
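For what it's worth, the integrity checking in ZFS is easy to exercise by hand (the pool name "tank" is a placeholder):

  zpool scrub tank       # re-read every block in the pool and verify its checksum
  zpool status -v tank   # shows scrub progress and lists any files with errors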
 
I have read somewhere on this forum that once ZFS reaches around 80% of its capacity, performance drops roughly 10x due to fragmentation. Is this true?
 
This is true; it's the nature of copy-on-write. I currently have five 32-disk ZFS RAID-Z sets which are right around 50% used. They are still humming along with no issues.
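On reasonably recent ZFS versions you can watch both numbers directly ("tank" is a placeholder pool name; the FRAG column may not exist on older releases):

  zpool list -o name,size,alloc,cap,frag,health tank   # CAP = percent used, FRAG = free-space fragmentation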
 
Don't use OpenIndiana, because its development has stalled. Choose OmniOS instead, as it is actively maintained by a big-time storage provider. OmniOS also has the advantage of being specifically developed to support large-scale storage solutions, and it has a formalized development path with long-term-support, stable, and unstable releases, like you see in Debian and Ubuntu. Read here: http://omnios.omniti.com/wiki.php/ReleaseCycle.

Back to your questions: for your requirements, ZFS will be the optimal solution, and as a bonus ZFS provides native NFS, so you will be able to support HA containers running on either raw or qcow2 images; qcow2 should be preferred due to capabilities like snapshots and clones.
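A minimal sketch of that setup, assuming a pool called "tank" and made-up dataset and image names:

  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore                                      # ZFS manages the NFS export itself
  qemu-img create -f qcow2 /tank/vmstore/vm-100-disk-1.qcow2 32G        # thin-provisioned qcow2 image
  qemu-img snapshot -c clean-install /tank/vmstore/vm-100-disk-1.qcow2  # internal qcow2 snapshot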

Specific answers:
1) Check
2) Check
3) Check
4) Check
5) Check, once you get accustomed to ZFS.
 
As for the 80% fragmentation concern, the simple math is this: when the storage reaches around 80% used capacity, you simply add more disks, which lowers the usage percentage, and you are all set.
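In shell terms that is a single command (the disk names are placeholders, and the new vdev should match the geometry of the existing ones):

  zpool add tank raidz2 sdh sdi sdj sdk sdl sdm   # grows the pool immediately, no rebuild needed
  zpool list tank                                 # CAP drops as the new free space is counted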
 
That makes perfect sense.
Thanks for your answers.
Just as a side note, I was wondering out of curiosity if this could be another option.
Openfiler supports XFS (NFS) shares. It also supports snapshots. Finally, it supports clustering out of the box.
I know its development has stalled and there is no support anymore. However, I have been using Openfiler (iSCSI) with VMware vCenter for about five years now without issues.
I have not tried it with Proxmox, though.
But apart from the fact that there is no support, Openfiler still seems to be very stable once installed and set up.
Any thoughts on this?
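(In case anyone wants to try the same thing: attaching an NFS export to Proxmox is quick to test; the storage name, server address, and export path below are made up, and option names may differ on older Proxmox versions:)

  pvesm add nfs openfiler-nfs --server 192.168.1.50 --export /mnt/vol1/share --content images,iso,backup
  pvesm status   # the new storage should show up as active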
 
Hi,
you can also take a look at openATTIC. It is under heavy development and also supports a lot of different storage technologies (ZFS, Btrfs, DRBD, ...).

Udo
 
