I need some advice about SAN/LUN Fibre channel configuration

rahman

Renowned Member
Nov 1, 2010
Hi,

Our new servers are 2x Fujitsu RX300 (each with two 6-core Xeons and 72 GB RAM, plus 2x 146 GB SAS used as RAID 1) and the SAN is a Fujitsu DX90 (12 disk slots, with 6x 3.5" 450 GB SAS and 6x 2 TB Nearline SAS). The DX90 has Fibre Channel ports to connect directly to the servers. No FC switch, and no Ethernet ports for iSCSI support.

I will use both of them for virtualization.

I just want to hear your advice about how to configure the SAN storage. This is the first time I am dealing with a SAN. I am planning to create two RAID 10 groups, one for SAS and one for NL-SAS. But should I create one logical volume/LUN per RAID group, or two or more? Can a LUN be used by two servers, or only one server per LUN?

I will be glad if you can point me in the right direction.

Note: There will be an Oracle DB which will not have that much IO load, mostly read operations. A few Windows Server VMs and lots of Linux VMs: web server, mail server (250 mail users), RADIUS, LDAP, etc.

Note: I also want to ask about the BBU battery: how long does it take to fully charge? The server has been running for about 3 days and it is still not fully charged, so write-back mode is disabled. Could the battery be faulty, or is this normal for a first-time charge?

Thanks.
 
You will be using OpenVZ for the Linux VMs and KVM for the Windows VMs.

OpenVZ

* You will need 2 LUNs for OpenVZ as it cannot share a filesystem - so 1 LUN for your 1st Proxmox server and 1 LUN for your 2nd Proxmox server.
* You will need to follow some of the steps in this wiki http://pve.proxmox.com/wiki/OpenVZ_on_ISCSI_howto as FC will be presented as local storage, so instead of setting up an iSCSI target you will be adding a local storage location (don't set up LVM for OpenVZ, just add the local storage only).
* OpenVZ can only be used on local storage - or devices mounted as a local filesystem - so the point here is that the /var/lib/vz folder needs to be mounted as ext3 (or whatever filesystem you are going with) on the 1st LUN of your storage for the 1st server, and on the 2nd LUN for the 2nd server.
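As a sketch, assuming the LUN dedicated to this server shows up as /dev/sdb (that device name is a guess; verify with `fdisk -l` or `lsscsi` before formatting anything):

```shell
# one-time: create a filesystem on the LUN dedicated to this server
mkfs.ext3 /dev/sdb
# mount it where OpenVZ expects its data
mount /dev/sdb /var/lib/vz
# make the mount permanent across reboots
echo '/dev/sdb /var/lib/vz ext3 defaults 0 2' >> /etc/fstab
```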

KVM

* Not sure with FC-attached storage - but I assume it would be adding a local directory with the 'shared' box checked, and then creating an LVM group using that local disk you just added as the base storage, again with the shared option checked. Once your Proxmox servers are in a cluster, that should then be available to both servers.
* Create a virtual machine and test that live migration works as it should.
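Assuming the nodes are already clustered, that last step can also be checked from the console (the VM ID and node name below are made-up examples; check `qm help` for the exact syntax on your version):

```shell
# live-migrate running VM 101 to the other node; it should stay running
# throughout if the underlying storage is genuinely shared
qm migrate 101 node2 -online
```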

As far as your LUN breakup is concerned, I would put the Linux VMs on your fast disks and, as I said, use OpenVZ containers, with 1 LUN per server on whichever RAID group is fastest.

For the other RAID group, use it for KVM and have, say, x LUNs (all shared between both servers). The value of x will have to be worked out for what works best with the number of disks you have (6), as there will be a sweet spot where you see significant performance benefits. Try 1 massive LUN with 3 Windows VMs and run some read/write performance tests. Then try 3 LUNs with 1 VM per LUN and test again.
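A crude way to run those read/write tests (paths and sizes here are examples; use a test file larger than the controller cache so you measure the disks, not the BBU-backed cache):

```shell
# sequential write: 4 GB of zeros, forcing data to disk before timing ends
dd if=/dev/zero of=/mnt/test/bench.bin bs=1M count=4096 conv=fdatasync
# drop the page cache so the read test actually hits the disks
echo 3 > /proc/sys/vm/drop_caches
# sequential read of the same file
dd if=/mnt/test/bench.bin of=/dev/null bs=1M
```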

You might also want to consider making all 12 disks the same - however, this might not be an option for you now as you already have the hardware. But recommendations out there from some real geeks who love this stuff say never to mix disks in a SAN; to hit a sweet spot you would use all 12 disks in a single RAID 10 and then split that into 3 or 4 LUNs - which, with the above recommendations for OpenVZ and KVM, would work out well: 2 for OpenVZ and 2 for KVM (or 1 for KVM and the 4th for snapshots, vzdumps, or qemu dumps)...

Cheers
 
Thanks for your reply.

You will be using OpenVZ for the Linux VMs and KVM for the Windows VMs.

Well, I am planning to use KVM for all VMs, both Windows and Linux. Should I not?


Well, I thought I should create a LUN scheme like this: make a RAID 10 from the faster 6x SAS disks and create 2 LUNs on it, and another RAID 10 from the slower but larger 6x NL-SAS disks with 2 LUNs on that too. Each server would use one fast LUN and one large LUN. That way I can put VMs that need more speed on a fast LUN and VMs that need more storage on a large LUN. As I said, this is the first time I am dealing with SAN storage and maybe I am missing something. Does this scheme seem OK?
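For reference, the usable capacity of that layout works out as below (RAID 10 mirrors halve the raw capacity; 1 TB is taken as 1000 GB):

```shell
# usable capacity under the proposed layout: RAID 10 halves raw space
sas_usable=$(( 6 * 450 / 2 ))     # 6 x 450 GB SAS  -> usable GB
nl_usable=$(( 6 * 2000 / 2 ))     # 6 x 2 TB NL-SAS -> usable GB
echo "fast pool: ${sas_usable} GB, bulk pool: ${nl_usable} GB"
# prints: fast pool: 1350 GB, bulk pool: 6000 GB
```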

I also want to ask something else: how many LUNs can I map to one FC port? Can I map multiple LUNs to a single FC port, or do I need one FC port per LUN? I am asking because I created a LUN and mapped it to port0 of the DX90. Then I tried to create another LUN and map it to port0 as well. It didn't work; it complained that the WWN is already in use.
 
Well, I am planning to use KVM for all VMs, both Windows and Linux. Should I not?
I use KVM for both Linux and Windows. But people say, and this is surely true, that an OpenVZ VM consumes fewer resources than a KVM VM, so you can put more on the same server (in my opinion, that can be risky if the host fails...). OpenVZ can only "virtualize" Linux machines, as it is a container solution and all machines share the Linux kernel of the host. So you are restricted to the few kernels supported by OpenVZ (2.6.18, 2.6.24, 2.6.32, which are patched), and you have to recompile the host kernel if you need an additional module...

If you have enough resources with your two servers, you can use KVM.

KVM, on the other hand, is a full virtualization solution, so you can install any kernel/OS you want (Linux, Windows, BSD...), and it has been included in the mainline Linux kernel since 2.6.20, so you can use, for example, the latest one (or a recent one provided by Proxmox, 2.6.35) on the host. Every recent distribution supports KVM (Red Hat, Ubuntu, Debian...).



Well, I thought I should create a LUN scheme like this: make a RAID 10 from the faster 6x SAS disks and create 2 LUNs on it, and another RAID 10 from the slower but larger 6x NL-SAS disks with 2 LUNs on that too. Each server would use one fast LUN and one large LUN. That way I can put VMs that need more speed on a fast LUN and VMs that need more storage on a large LUN. As I said, this is the first time I am dealing with SAN storage and maybe I am missing something. Does this scheme seem OK?

I also want to ask something else: how many LUNs can I map to one FC port? Can I map multiple LUNs to a single FC port, or do I need one FC port per LUN? I am asking because I created a LUN and mapped it to port0 of the DX90. Then I tried to create another LUN and map it to port0 as well. It didn't work; it complained that the WWN is already in use.
See the wiki; the 'Storage Model' page can be helpful:
http://pve.proxmox.com/wiki/Storage_Model

With iSCSI it is recommended to add the storage as an iSCSI target (without using the LUN directly) and then add an LVM group on that target, using the web interface. The advantage is that you can use LVM snapshots to back up your VMs without stopping them (though for a database, you might capture it in an inconsistent state...), and if the storage is shared, you can live-migrate your VMs between servers.
But without a switch (or better, two with multipath) to share the storage, I don't think the last option will work...
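The LVM-snapshot backup mentioned above is what vzdump's snapshot mode relies on; a hedged sketch of invoking it (the VM ID and dump directory are made-up examples, and the snapshot needs free space in the volume group):

```shell
# back up VM 101 via an LVM snapshot while the VM keeps running
vzdump --snapshot --dumpdir /backup 101
```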

Good Luck !

Alain
 
I have to add that I don't know if the same scheme applies to Fibre Channel as to iSCSI. I don't think it is possible to manage Fibre Channel storage through the Proxmox web interface...

Alain
 

Thanks, Alain.

So I will stick with KVM for all VMs, as I have more than enough resources. And KVM seems to have a bright future ahead.

The DX90 has no iSCSI support; it only supports FC. I connect it directly to the servers as we have no FC switch. I know I can't manage it via the web interface. The LUN appears as a local block device (like /dev/sg) and I can create LVM on it via the console.
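For what it's worth, on the host the FC LUN can be identified like this (lsscsi may need installing first; device names will differ per system):

```shell
# list SCSI devices with vendor/model strings - the DX90 LUN should show up here
lsscsi
# older interface showing the same attached SCSI targets
cat /proc/scsi/scsi
```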
 
Hi Rahman,

Do some speed, IO, and RAM/CPU usage tests on both and you can decide that way. I think you will find that OpenVZ is as flexible (with some manual tuning and scripts) as KVM, and I also think you will find performance differences between the two.
 
Well, I thought I should create a LUN scheme like this: make a RAID 10 from the faster 6x SAS disks and create 2 LUNs on it, and another RAID 10 from the slower but larger 6x NL-SAS disks with 2 LUNs on that too. Each server would use one fast LUN and one large LUN. That way I can put VMs that need more speed on a fast LUN and VMs that need more storage on a large LUN. As I said, this is the first time I am dealing with SAN storage and maybe I am missing something. Does this scheme seem OK?

I also want to ask something else: how many LUNs can I map to one FC port? Can I map multiple LUNs to a single FC port, or do I need one FC port per LUN? I am asking because I created a LUN and mapped it to port0 of the DX90. Then I tried to create another LUN and map it to port0 as well. It didn't work; it complained that the WWN is already in use.
Hi,
you can map many FC LUNs. In your case, you must map the LUNs to both WWNs of your servers (this is done at the FC switches; with direct attach I'm not sure), so that both servers are able to see the storage.
Then create a partition on the storage (recommended, not required) of type 8e (LVM), then pvcreate and vgcreate. After that you can add this volume group in the Proxmox GUI, with 'shared' selected.
That way you can create KVM VMs with storage on the FC RAID, and online migration should also work.
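A sketch of those console steps, assuming the LUN shows up as /dev/sdb (verify the device name first with `fdisk -l`; the volume group name "san_fast" is made up):

```shell
# after creating one partition of type 8e (Linux LVM) on the LUN with fdisk,
# initialize it as an LVM physical volume and build a volume group on it
pvcreate /dev/sdb1
vgcreate san_fast /dev/sdb1
# "san_fast" can then be added in the Proxmox GUI as an LVM group,
# with the "shared" option checked so both cluster nodes can use it
```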

Do you have multipath to the storage? Then you must configure that too.

Udo
 
Hi,
you can map many FC LUNs. In your case, you must map the LUNs to both WWNs of your servers (this is done at the FC switches; with direct attach I'm not sure)

Udo

Thanks udo, this is getting confusing for me :)

The DX90 SAN array has 8 FC ports. Each RX300 server has two FC ports (Emulex HBAs, one port per card, two cards per server). No FC switch; we need to connect them directly to the storage.

Now, I will be glad if you can help me understand this:

1. Can I assign two LUNs/logical volumes to one FC port of the DX90 (like port1)? So, by connecting port1 of the DX90 directly to port1 of one server, should I see both LUNs (like /dev/sg and /dev/sf) on the server over one port/one cable? It seems that with a direct FC connection this is not possible, as assigning a second LUN to the same port1 of the DX90 with the same server WWN fails with a "WWN already used by this port" error.

2. Can a LUN be shared/used by two servers, again without any FC switch? Is it safe, or would the second server corrupt the data on the LUN?

Sorry for the low-quality illustration :) This is how it works now:
[attachment: sanw.png - diagram of the current direct FC cabling]


What I am asking is: can server2 see both Raid1-LUN1 and Raid2-LUN1 via one FC port instead of two? Is this possible without an FC switch? And can I use, say, Raid1-LUN1 on both servers at the same time?

Thanks to all of you for your patience and help.
 
Hi,
like I wrote before, I use FC only via a switch. But you should be able to map all LUNs to all RAID ports. Perhaps you must change the port mode (FC / FC-Loop).
The WWNs should also be different, because you are seeing different FC NICs.
Two (or more) servers can share the same LUN at the same time as a volume group (not as a filesystem). Proxmox takes care that only one server uses a logical volume at a time. If you use a cluster filesystem, many servers can also share a LUN as a filesystem (but LVM works very well with Proxmox).

In your scenario you must install multipath (there are some (old) postings in this forum). You should use only one port per server at first, and once that runs well, configure multipath.
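As a rough sketch of what that involves on Debian-based Proxmox (the package name is the Debian one; the WWID-named device under /dev/mapper will differ on your system):

```shell
# install the device-mapper multipath tools
apt-get install multipath-tools
# after cabling both HBA ports, each LUN appears twice; multipath merges
# the two paths into a single device - list the detected paths:
multipath -ll
# use the /dev/mapper/<wwid> device (not /dev/sdX) for pvcreate etc.,
# so IO survives the failure of either path
```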

Udo
 
