Shared storage suggestion for a 5 node cluster?

locusofself

Member
Mar 29, 2016
Hi folks,

Any recommendations for a shared storage unit for a 5-node Proxmox cluster? Mostly Linux machines and pretty low traffic - mostly doing email for small groups and some VoIP. No "big data".

Is NFS fast enough? These are blade servers in a Dell M1000e enclosure.
 
NFS is fine, although insecure. It's the quick and easy way. Can you describe your setup a little more? I am working on VoIP hosting as well.
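If you want to try NFS first, a minimal entry in /etc/pve/storage.cfg would look something like this (server address, export path and storage name are just placeholders):
Code:
nfs: shared-nfs
  server 192.168.1.50
  export /export/proxmox
  path /mnt/pve/shared-nfs
  content images,rootdir
You can also add it from the GUI under Datacenter -> Storage -> Add -> NFS.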
 
Security is important. Perhaps iSCSI is the better solution? I would like to also connect some non-blade 1U servers I have to the shared storage ... so although my blade enclosure probably has some unified storage connection, I think something networked may be better for me.

Let's say I have a budget of $3,000 for a shared storage solution ... any contenders in that range? Say I need to scale up to 100 Linux VMs: some FreePBX/Asterisk, some OpenVPN servers, and some email servers. Nothing too high traffic and nothing too disk intensive.

Speed (latency) and redundancy/integrity of data are most important to me - I can't afford to lose custom configurations for my customers' VoIP systems, etc.
 
A SuperMicro board and a number of disks in a ZFS raidz2 running OmniOS, used through the ZFS over iSCSI plugin in Proxmox, could give you both performance and security.
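Roughly speaking, the OmniOS side is just creating the pool and enabling the COMSTAR iSCSI target; something like this, where the pool name and disk IDs are only examples:
Code:
# create the raidz2 pool (disk names will differ on your system)
zpool create data raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# enable COMSTAR and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
# create a target; the generated IQN goes into the Proxmox storage definition
itadm create-target
napp-it gives you a web GUI on top of this if you prefer not to do it by hand.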
 
What about a used real storage system? Maybe 4 Gbit FC plus a dual-controller SAN?

A cheap solution would also be two nodes with DRBD exporting iSCSI with multipath. This should give you the redundancy needed.
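As a rough sketch, the DRBD resource on the two storage boxes would look something like this (hostnames, disks and addresses are placeholders); /dev/drbd0 is then exported as an iSCSI LUN from both boxes and the Proxmox nodes reach it via multipath:
Code:
resource r0 {
  protocol C;               # synchronous replication
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/sdb;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/sdb;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}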
 
A SuperMicro board and a number of disks in a ZFS raidz2 running OmniOS, used through the ZFS over iSCSI plugin in Proxmox, could give you both performance and security.

We are looking for an iSCSI solution and saw https://forum.proxmox.com/threads/omnios-zfs-and-iscsi.12925/ and this thread.
I want to separate our storage from PVE.

A couple of questions:

Are you still using napp-it.org OmniOS?

Would Intel 10 Gigabit Ethernet be good for the iSCSI network, or is InfiniBand better to use? We have both spare.
 
Are you still using napp-it.org OmniOS?
Yes. Has been working flawlessly for years.
Would Intel 10 Gigabit Ethernet be good for the iSCSI network, or is InfiniBand better to use? We have both spare.
10 Gbit is also fine. The reason I use InfiniBand was the price tag for 10 Gbit three years ago.
Note that you cannot use Intel Skylake or Xeon D since both come with the Intel X550, which is currently not supported by Illumos, but all 10 Gbit NICs older than the X550 are fully supported.
 
I'm interested in your setup.

I assume you use zfs send/receive to back up - if so, which package/software did you use?

We do not have a huge storage requirement, so I am going to use Intel SSDs for the zpool. The chassis is a 24-drive Supermicro, so I'll add drives as needed.

We have ten 480 GB drives and are debating whether to use one large raidz2 pool, two 5-drive raidz1 pools, or something else. What do you suggest?

As we'll be using all SSDs, I plan to skip the log and cache devices. Or is that a mistake?
 
For backups I simply use the backup solution which comes with Proxmox (vzdump).
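Either as a scheduled backup job in the GUI or from the shell; something like this (VM ID and target storage name are placeholders):
Code:
vzdump 101 --storage backup-nfs --mode snapshot --compress lzo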

I would suggest 2 pools:
1. A RAID10 / striped mirror pool (2x2 disks should be sufficient) for I/O-intensive servers like database servers.
2. A raidz2 pool (the remaining 6 disks) for the rest.

If your pool is built entirely from SSDs, there is nothing to gain from a separate cache or log device.
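In zpool terms that would be something along these lines (disk IDs are just examples):
Code:
# pool 1: striped mirror for the I/O-intensive guests
zpool create fast mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# pool 2: raidz2 for the rest
zpool create bulk raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0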
 
Thank you for the advice.

Also - by backup I meant backing up napp-it. I was thinking of using a spare system with spinning disks to back up ZFS using send/receive.
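Something along these lines is what I have in mind (pool and host names are placeholders):
Code:
# initial full copy to the backup box
zfs snapshot -r data@2016-04-10
zfs send -R data@2016-04-10 | ssh backup-host zfs receive -F tank/data
# later runs only send the incremental difference
zfs snapshot -r data@2016-04-17
zfs send -R -i data@2016-04-10 data@2016-04-17 | ssh backup-host zfs receive tank/data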
 
Hi,

yes, you can use it with LVM on iSCSI, not thin LVM.
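For reference, the combination would look something like this in /etc/pve/storage.cfg (storage names and the base volume ID are placeholders - the base volume is picked from the LUNs the iSCSI storage exposes):
Code:
iscsi: san
  portal 172.30.24.15
  target iqn.2010-09.org.napp-it:1459764979
  content none

lvm: san-lvm
  vgname vg_san
  base san:0.0.1.scsi-36...
  shared 1
  content images,rootdir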
 
Security is important. Perhaps iSCSI is the better solution? I would like to also connect some non-blade 1U servers I have to the shared storage ... so although my blade enclosure probably has some unified storage connection, I think something networked may be better for me.

Let's say I have a budget of $3,000 for a shared storage solution ... any contenders in that range? Say I need to scale up to 100 Linux VMs: some FreePBX/Asterisk, some OpenVPN servers, and some email servers. Nothing too high traffic and nothing too disk intensive.

Speed (latency) and redundancy/integrity of data are most important to me - I can't afford to lose custom configurations for my customers' VoIP systems, etc.

I think $3k is a good start for a single SAN server. Another consideration is the network interfaces on your Proxmox nodes and the networking equipment they are connected to. For example, we had a lot of issues with QoS. Should SAN traffic be favored over SIP or RTP? What happens if you need to boot a VM while 80 other PBX VMs are handling live calls?

What kind of network interfaces do your Proxmox nodes have? What is your networking vendor and backbone topology? You need to consider scalability and redundancy, as well as geo-redundancy. If you are willing to share more information about your setup, I can tell you how my firm solved our storage problem for MUCH less than $6k (assuming a redundant NAS setup at $3k each) plus upkeep overhead.

So for 'speed' the real metric is your network throughput, QoS, and your Proxmox nodes' NIC bandwidth.

For redundancy, there are open-source options like Ceph, ZFS, and pNFS (I am assuming you intend to have multiple physical storage arrays for redundancy).

As for integrity, I don't know of any open-source Linux tools that can accomplish this automatically. One issue with Proxmox in a shared-storage environment is the possibility of a split-brain condition. This happens when corosync detects a node failure for whatever reason and starts the HA VMs on a quorate node, but the "failed" node still has those VMs running and reading/writing data on the storage system. Now you have two identical instances of the same VM reading and writing to the same file location, eventually destroying it.

Proxmox does have some fencing support, for example IPMI fencing. However, if the IPMI interface becomes unresponsive on the failed node, Pacemaker uses an outdated Open Cluster Framework script that will hang forever waiting for an IPMI response, effectively making your HA cluster non-HA. So decide whether you want to risk split-brain for a single VM, or have an entire Proxmox node and however many VMs running on it down until you get angry calls from 120 customers all at once.
 
Do you use Proxmox? If so, as a cluster?

Could you share details on the storage system? I'm in the process of moving storage off of Proxmox and am researching different systems to test.

I like ZFS; after 7-8 years it has proven really hard for me to destroy.
 
Not necessarily, but it will greatly improve administration.

Here I may have something set wrong, as it looks like it is required.

I have an iSCSI target added as storage type 'ZFS over iSCSI'; a KVM guest is installed there.
Code:
zfs: iscsi-pro4
  target iqn.2010-09.org.napp-it:1459764979
  pool data
  blocksize 8k
  portal 172.30.24.15
  iscsiprovider comstar
  content images
  nowritecache


Then I added an iSCSI storage to be used for LVM. I just chose storage type 'iSCSI' - not sure if that was the correct choice for LVM.
Code:
iscsi: iscsi-2
  target iqn.2010-09.org.napp-it:1459764979
  portal 172.30.24.15
  content none

The issue is that when I try to create an LVM volume group using 'iscsi-2',
the only LUN presented at 'Base Volume:' is the one already used by the KVM guest.


Am I doing something incorrect? It seems so. Per https://pve.proxmox.com/wiki/Storage_Model#Use_iSCSI_LUN_directly it looks like
I am running into this:
'This means if you use a iSCSI LUN directly it still shows up as available and if you use the same LUN a second time you will loose all data on the LUN.'
 
What you want to achieve requires the iscsi storage plugin and not ZFS over iSCSI. ZFS over iSCSI is for zvols provided as whole disks to VMs only. You cannot share a LUN between different storage plugins, so in your case you will have to create a LUN on OmniOS manually and make that available to the iscsi plugin.
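On the OmniOS side something along these lines should do it (zvol name and size are placeholders; napp-it can do the same through its web GUI):
Code:
# dedicated zvol that will back the LVM LUN
zfs create -V 500G data/pve-lvm
# register it as a COMSTAR logical unit and make it visible to the target
sbdadm create-lu /dev/zvol/rdsk/data/pve-lvm
stmfadm add-view 600144f0aabbccdd   # use the GUID printed by create-lu
After that the new LUN should show up in Proxmox as a selectable base volume when you create the LVM storage on top of your iscsi-2 entry.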
 
What you want to achieve requires the iscsi storage plugin and not ZFS over iSCSI. ZFS over iSCSI is for zvols provided as whole disks to VMs only. You cannot share a LUN between different storage plugins, so in your case you will have to create a LUN on OmniOS manually and make that available to the iscsi plugin.

So for KVM and LVM-for-LXC I should use just the iscsi storage plugin?
 
