Which filesystem for cluster (with HA)

emberlin

New Member
Jul 14, 2015
Hi,

I searched the forum but only found old posts, so I wonder if there are any new ideas.

I'm setting up a cluster of 5 Proxmox nodes (KVM only). We have 1 storage node running FreeNAS. The storage is connected via 2x10Gbit ports (MPIO) to the storage VLAN.
Each Proxmox node is connected via 2x1Gbit to the storage VLAN.

The idea is to export the storage (via iSCSI) for the VMs.

A couple of years ago I had a 3-node Proxmox cluster with cLVM (clustered LVM). That approach now seems a bit outdated, so I wonder if there's a better solution.

What should I use to be able to have live migrations?

Thanks!
em
 
If you want to use iSCSI with Proxmox you shouldn't use FreeNAS, since the iSCSI implementation in current FreeNAS is not optimal for Proxmox. I would recommend using OmniOS, with napp-it as the web GUI. With OmniOS you can use the ZFS over iSCSI storage plugin, which gives you online migration, snapshots, and clones. If you are unfamiliar with Solaris (OmniOS is based on Illumos, a fork of OpenSolaris), you can try OpenMediaVault (Debian-based), which also provides a GUI for ZFS (the plugin was created by me and another OpenMediaVault user). Read more here: https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
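For reference, the entry in /etc/pve/storage.cfg then looks something like this (the storage name, pool, portal, and IQN below are only placeholders for your own setup):

Code:
zfs: omnios-zfs
        blocksize 4k
        iscsiprovider comstar
        pool tank
        portal 192.168.10.1
        target iqn.2010-09.org.example.omnios:target0
        content images
        sparse 1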
 
Hi Mir,

Thanks for the info!
I would like to try OmniOS but my Solaris knowledge is not that good and it might end up in a mess.
Should I expect good performance from a Debian-based ZFS (OMV)? I read (about a year ago) some bad reviews about ZFS on Linux, but I'm not sure what the status is now.

em
 
But for Ceph, I thought you need a monitor node + OSD nodes. We have a bunch of disks but just one available storage node.

That storage node will also export iSCSI for our Oracle DB, so I'm not sure Ceph will work here.
 
I tried some benchmarks and the difference between ZoL and OpenIndiana was just a few percent. If you are OK with the Linux CLI, don't be afraid of Solaris. It is very similar, and with napp-it there is a big probability you won't need anything else.
Anyway, I left this solution in favor of Ceph and GlusterFS on ZFS. If the storage is only for VMs I would prefer Ceph, as it seems more stable to me (it is just a feeling, as both technologies survived a complete blackout of my whole datacenter recently). You can have the monitor and OSD on the same server; I have not stumbled upon any problems yet.
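On the Proxmox side it boils down to roughly this per node (the cluster network and the disk are only examples):

Code:
pveceph install                         # install the Ceph packages on the node
pveceph init --network 10.10.10.0/24    # initialise the cluster config (run once)
pveceph createmon                       # the node now runs a monitor
pveceph createosd /dev/sdb              # ...and an OSD on the same node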
 
I would like to try OmniOS but my Solaris knowledge is not that good and it might end up in a mess.
Since the OmniOS userland is GNU-based, there is practically no difference between OmniOS and Debian. There are of course some Solaris-specific commands which are not GNU, but these are rarely if ever needed. If you use napp-it, I think you can do without them.

Should I expect good performance from a Debian-based ZFS (OMV)? I read (about a year ago) some bad reviews about ZFS on Linux, but I'm not sure what the status is now.
There is hardly any performance difference between ZFS on OmniOS and on Linux - a difference might only show up in large installations. Where you can expect a big performance difference is the iSCSI implementation. The iSCSI implementation on OmniOS is COMSTAR, which is far superior to the iSCSI implementations in Linux and FreeBSD and is used for iSCSI (over FC, InfiniBand, and Ethernet) in enterprise Solaris installations worldwide.
 
Anyway, I left this solution in favor of Ceph and GlusterFS on ZFS. If the storage is only for VMs I would prefer Ceph, as it seems more stable to me (it is just a feeling, as both technologies survived a complete blackout of my whole datacenter recently). You can have the monitor and OSD on the same server; I have not stumbled upon any problems yet.
Would you still recommend Ceph over ZFS if you only have one storage server?
 
Good point. I would recommend distributing the existing drives across the rest of the servers and using Ceph, just because of HA. If HA is not that important, I would use OmniOS + napp-it.
 
So, I ended up installing OmniOS + napp-it and it's working very nicely.

I'm a bit confused about whether to use (on the Proxmox side) iSCSI, ZFS over iSCSI, or even LVM as the storage type.

This is what I've done (roughly the commands sketched below):
- on OmniOS: created a ZFS pool called vm
- created a thin-provisioned LU 'vm/images'
- created a target, target group, view, etc.; the target uses 'vm/images'
- added the iSCSI target to Proxmox.
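For reference, this is roughly the CLI equivalent of those steps on OmniOS (the zvol size, the group name, and the <...> placeholders are only examples):

Code:
zfs create -s -V 500G vm/images                  # thin-provisioned zvol backing the LU
stmfadm create-lu /dev/zvol/rdsk/vm/images       # prints the LU GUID
itadm create-target                              # auto-generates a target IQN
stmfadm create-tg tg-proxmox
stmfadm offline-target <target-iqn>              # a target must be offline to join a group
stmfadm add-tg-member -g tg-proxmox <target-iqn>
stmfadm online-target <target-iqn>
stmfadm add-view -t tg-proxmox <lu-guid>         # export the LU through the target group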

Is this the correct way to do it? Proxmox's ZFS over iSCSI seems to be the way to go, but I'm not sure if I'm doing it right.
The idea is to have two targets in MPIO. Will ZFS over iSCSI also work in that case?

Thanks!
em
 
Just to clarify: I thought about MPIO because I read that LACP with iSCSI is a 'no'.

We're planning on storing our Oracle DB on the iSCSI storage as well as the VMs, so I'm looking for the best performance.

Maybe it makes more sense to use bonding with iSCSI for failover, and use ZFS over iSCSI.
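Something like this in /etc/network/interfaces is what I have in mind on the Proxmox side (the address and interface names are placeholders, and the switch ports would need a matching 802.3ad group for LACP):

Code:
auto bond0
iface bond0 inet static
        address 192.168.10.21
        netmask 255.255.255.0
        bond-slaves eth1 eth2
        bond-miimon 100
        bond-mode 802.3ad               # LACP; active-backup would be plain failover
        bond-xmit-hash-policy layer3+4  # hash on IP+port so several sessions can use both links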
 
Where have you read that? I have been using LACP and iSCSI for ages without problems.

While doing some research I found many blog posts advising against using LACP with iSCSI on performance grounds. But I guess it depends on the scenario.

I'll try both MPIO and LACP and see which one works better for us.
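For the MPIO test the plan is roughly this on the Proxmox side, assuming a plain iSCSI/LVM setup and two portal addresses (the IPs are placeholders):

Code:
apt-get install open-iscsi multipath-tools
iscsiadm -m discovery -t sendtargets -p 192.168.10.1
iscsiadm -m discovery -t sendtargets -p 192.168.11.1
iscsiadm -m node --login
multipath -ll    # should show the LUN with two active paths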
 
