Multi-node HA cluster with iSCSI-only storage

newacecorp

Hello,

What are my storage options for creating a 3-node Proxmox HA cluster using an Equallogic PS6000X as the storage backend? The PS6000X only supports iSCSI.

I tried to enable OCFS2 as well as GFS2 on the nodes, but both Debian packages conflict with PVE.

This leads me to ask: what options are out there for a Proxmox cluster when you only have access to iSCSI and need shared storage for the nodes?

The primary motivation for using the PS6000X is its redundancy (dual independent power supplies and dual fail-over controllers, with four Ethernet ports per controller for multipath), which is hard to find in a NAS that supports NFS.

Any feedback would be appreciated.

Regards,

Stephan.
 
As far as I know, Proxmox also supports GlusterFS, Ceph, and ZFS. They could sit on top of the LUNs.

Do you already have this SAN? If not, I would think about buying a NetApp and using NFS.
 

Just follow http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
Also consider this one: http://pve.proxmox.com/wiki/ISCSI_Multipath

A cluster file system is not needed and just adds complexity.
OCFS2 is not available (Oracle removed support), but GFS2 is; you just need to pick the right package:

> apt-get install gfs2-utils
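
For reference, here is a minimal sketch of the LVM-over-iSCSI setup those wiki pages describe (the portal IP, IQN, and device/volume-group names below are just examples):

Code:
# Discover and log in to the iSCSI target (portal IP and IQN are examples)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 192.168.1.50 --login

# Put LVM on the new LUN (check lsblk for the device the login produced)
pvcreate /dev/sdb
vgcreate vg_eql /dev/sdb

The volume group can then be added under Datacenter -> Storage -> Add -> LVM and marked as shared, so every node sees the same volumes.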


 


What this tells me is that Proxmox with iSCSI as the backend is not quite there yet as a replacement for VMware ESXi. The methods described in your links all have limitations (no snapshots, offline-only migration / copying of containers when migrating, etc.).

It's a bummer that Proxmox doesn't yet come with OCFS2 or GFS2 available out of the box. I'll have another look at NetApp, but in my opinion their build quality is not on the same level as Equallogic's.

Thanks all for the feedback.
 

Hello Tom,

Thanks for the feedback. Can you suggest a wiki that outlines how a multi-node cluster can access iSCSI storage without any of those limitations (as would be the case with NFS)?

I want to be able to run both containers and KVM images and be able to do live snapshots and migrations.

Thanks in advance.
 
Wonderful. How about the details? What version of Proxmox are you running and how are the images stored?

I am running 3.4 and have an HP P2000 G3 SAN which presents a number of LUNs over 10Gbit iSCSI to the Proxmox hosts. On the hosts I added the iSCSI target, created a multipath device for each LUN (since I have four paths to the SAN per host), and then created an LVM group on each. The storage is then available to be added as LVM through the GUI.
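
A rough sketch of that multipath step, assuming the multipath-tools package (the WWID and alias below are made up; use the values scsi_id reports for your LUNs):

Code:
# Find each LUN's WWID (device name is an example)
apt-get install multipath-tools
/lib/udev/scsi_id -g -u -d /dev/sdb

# /etc/multipath.conf excerpt: give each LUN a stable alias
multipaths {
    multipath {
        wwid  3600c0ff000d823e5f5e6525701000000
        alias p2000-lun0
    }
}

# Reload and verify, then build LVM on the multipath device
service multipath-tools reload
multipath -ll
pvcreate /dev/mapper/p2000-lun0
vgcreate vg_lun0 /dev/mapper/p2000-lun0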

So, it's iSCSI-backed LVM and all my VM disks are stored on it. I am able to do an online migration of any VM from one host to another.

Nothing special really. I am about to add some NFS storage for keeping backups and ISOs.

I am also aware that since I am using iSCSI-backed LVM, I can only use raw VM disk images instead of qcow2, and hence am unable to take snapshots. But I don't really care, since I take daily backups anyway.
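
To illustrate (file names are examples): internal snapshots are a feature of the qcow2 format, so qemu-img can take them on a qcow2 image but not on a raw one:

Code:
# qcow2 supports internal snapshots
qemu-img create -f qcow2 demo.qcow2 1G
qemu-img snapshot -c before-upgrade demo.qcow2
qemu-img snapshot -l demo.qcow2

# the same command fails on a raw image: raw has no snapshot support
qemu-img create -f raw demo.raw 1G
qemu-img snapshot -c before-upgrade demo.raw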

I should add that I have been a serious VMware vSphere guy in the past and have built/managed 100+ host vSphere environments with DR replication, HA, FT, vDS, etc. In this instance I have migrated AWAY from a vSphere environment to Proxmox, and given the cost comparison (free vs. really expensive), Proxmox is doing an EXCELLENT job.

I also use Open vSwitch instead of the native Linux networking model, which gives great flexibility as far as VLANs are concerned: there is no need to create an additional bridge per VLAN, and you can trunk more VLANs to the host without any reconfiguration; for a VM to use one, just add the tag when you create its vNIC.
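
As an example, a minimal /etc/network/interfaces sketch for that setup, assuming the openvswitch-switch package and Proxmox's OVS integration (interface names are illustrative):

Code:
# Physical NIC attached to the OVS bridge; the switch port carries the VLANs as a trunk
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

# One OVS bridge replaces the per-VLAN Linux bridges
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eth0

VLAN tags are then set per vNIC in each VM's hardware settings, with no host-side changes.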

And Proxmox is only going to get closer and closer to being a drop-in VMware replacement. The VE 4.0 release is bringing IPv6 support in the GUI and also watchdog-based fencing.
 

Thanks for the feedback. This is a crippled solution for my needs: I need thin provisioning, linked clones, and snapshots, plus the ability to store ISOs and backups.

I'm going to have to look at NFS storage from a NetApp solution instead of Equallogic's PS series.
 
You could also consider Nexenta. Nexenta means ZFS, and this enables both NFS and ZFS over iSCSI. ZFS over iSCSI gives you thin provisioning, linked clones, and snapshots, and all of this supports both raw and qcow2. The NFS access fulfills your requirement for storing ISOs and backups. In the long run, provided you have the budget, Nexenta can be upgraded to HA storage, since Nexenta has developed a ZFS cluster solution for two or more storage servers.
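
To give a feel for why ZFS enables this (pool and volume names are examples), thin provisioning, snapshots, and writable clones are all native, near-instant ZFS operations, which is what the ZFS-over-iSCSI approach builds on:

Code:
# Thin provisioning: a sparse zvol consumes space only as data is written
zfs create -s -V 32G tank/vm-100-disk-1

# Snapshots and linked clones are copy-on-write and near-instant
zfs snapshot tank/vm-100-disk-1@golden
zfs clone tank/vm-100-disk-1@golden tank/vm-101-disk-1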
 
