New Proxmox 3.1 installation with iSCSI network storage

rantzauer

New Member
Dec 4, 2013
Denmark
Hello

I am completely new to Proxmox, coming from VMware. I am testing Proxmox to find out if this is the way to go in the future to minimize costs, without compromising on functionality and performance.

I have installed Proxmox on an HP ProLiant BL460c Gen8 server with 10 network interfaces available; eth6, eth7, eth8 and eth9 are meant to be used to connect to the iSCSI network storage (for now an MSA2000).
My problem is how to configure these network cards correctly.
I have been reading/searching on the forum and can see that I should set up multipathing, but first I need to set up the network cards and iSCSI paths.

My MSA2000 has two network IPs, 172.16.1.xx and 172.16.2.xxx, which of course go to 2 switches. I now want eth6 and eth8 to be assigned to 172.16.1.xx and eth7 and eth9 to be assigned to 172.16.2.xx.

What is the correct way to do this? Should I bond eth6 and eth8, and bond eth7 and eth9, and then create my iSCSI targets after that? And will I then be able to configure multipathing afterwards?
Or is there a better way to do this?

Hope this makes sense.

Best regards

Rantzauer
 
Hi

Is there really no one who wants to help me get started with Proxmox in the best possible way?

If I have not been precise enough or something is unclear, please let me know.

/Rantzauer
 
Hello,

I can share my experience, since I've set up Proxmox in a similar environment.
Bonding the iSCSI interfaces is only useful if you load balance to get more bandwidth to the SAN. For redundancy it isn't really needed, since multipath will take care of that.
I suggest you write down a connection diagram and try to imagine the various faults and what should happen. There is not really just one solution, but there is certainly one that fits you perfectly.
The downside of iSCSI on Proxmox is the use of LVM. It's fast, but not very efficient in its use of free space. I've recompiled the Proxmox kernel to use OCFS2; at the moment that is the only option for a clustered FS in a Proxmox environment.

Regards.
 
Hi,
Thanks for your reply. As I see it, the best approach for me would be to bond my network cards two and two as described in my first post, and then set up multipath across the two bonds. If I understand you correctly this will give me more bandwidth to the SAN, and multipath will make sure I have redundancy.

You say that LVM is not efficient in using free disk space; can you explain that a little more? How does this affect my system?

Regards

Rantzauer
 
Hello,

Yes, bonding will give you more bandwidth to the SAN, BUT only if it is configured correctly. You can get more information here (look at the load balancing part).
Be sure that the SAN has enough bandwidth to keep up with the bonded interfaces! Losing packets on iSCSI is never a good thing.
Multipath will give you redundancy only if you have multiple Ethernet ports (or bonded interfaces) to the SAN.
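
To make that concrete, here is a minimal sketch of how the iSCSI interfaces could be set up on the Proxmox node. The addresses and portal IPs are only examples based on the two subnets you mentioned, not your real values:

Code:
# /etc/network/interfaces (excerpt) - one example address per iSCSI subnet;
# eth8 and eth9 would get their own addresses in the same two subnets if you
# keep four separate paths instead of two bonds
auto eth6
iface eth6 inet static
        address 172.16.1.50
        netmask 255.255.255.0

auto eth7
iface eth7 inet static
        address 172.16.2.50
        netmask 255.255.255.0

# discover and log in to both MSA controller portals (portal IPs are examples)
iscsiadm -m discovery -t sendtargets -p 172.16.1.10
iscsiadm -m discovery -t sendtargets -p 172.16.2.10
iscsiadm -m node -L all

With one session per subnet (or per port), multipath then sees the same LUN over several paths and takes care of the failover.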

About LVM: it only lets you carve the SAN up into fixed volumes for the various VMs, instead of allocating only the disk space that is actually used.
Let me try to explain it better:

Suppose you have a SAN LUN of 2 TB and you need to create 10 VMs with a 200 GB disk each.

With LVM, the LUN will be partitioned into 10 logical volumes of 200 GB each, filling up the SAN. If each VM only uses 5 GB, the remaining 195 GB per VM is still marked as used,
and you will not be allowed to create another VM (unless you expand the SAN).
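
Roughly, on the command line it looks like this (the device name, VG name and sizes are just examples for that scenario):

Code:
# the iSCSI (or multipath) LUN becomes one LVM physical volume / volume group
pvcreate /dev/mapper/mpath0
vgcreate san /dev/mapper/mpath0

# every VM disk is a fixed-size logical volume, allocated up front
lvcreate -L 200G -n vm-101-disk-1 san
# ...repeated for all 10 VMs...

# after 10 x 200G the volume group is full, no matter how little the guests
# have actually written
vgs san     # VFree is down to ~0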

With OCFS2, the SAN is used by Proxmox as a generic shared directory and you can create sparse or qcow2 files to host the VM disks.
It works like an NFS share does, but it is much, much faster and more efficient.
Using the previous example, you would use only (5 * 10) = 50 GB of SAN space instead of 2 TB.
Obviously you can overcommit the disk space, but it is your responsibility to guarantee that free space stays available.
Even then, OCFS2 supports online expansion of the volume with zero downtime!
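
As a rough illustration (the paths and file names are just examples), on a file-based storage such as an OCFS2 mount:

Code:
# create a 200G qcow2 disk; almost nothing is consumed at creation time
qemu-img create -f qcow2 /mnt/san/images/101/vm-101-disk-1.qcow2 200G

# virtual size stays 200G, the actual usage grows only as the guest writes
qemu-img info /mnt/san/images/101/vm-101-disk-1.qcow2
du -h /mnt/san/images/101/vm-101-disk-1.qcow2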

I asked for the OCFS2 modules to be enabled in Proxmox years ago, but always got a negative response. Every time they release a new kernel I need to recompile it. It's not an easy path.


Hope it helps,

Regards.
 
Hi,

Thank you very much for clearing this up for me. I think I understand.

And that is a big miss with LVM, if it works like you say. This makes Proxmox unusable for me. We are coming from VMware, where we have been able to manage disks and overcommit them, and that is not a feature I am going to give up. And I do not have the know-how to recompile the kernel to use OCFS2. I think you have just saved me a lot of hours of setting up and testing Proxmox before I would have found this out myself.
I think for now I will look at other alternatives; maybe I will end up keeping on paying the big bucks to VMware.

Again thank you for your help, it is very appreciated.

Best regards
Rantzauer
 
...
The downside of iSCSI on Proxmox is the use of LVM. It's fast, but not very efficient in its use of free space. I've recompiled the Proxmox kernel to use OCFS2; at the moment that is the only option for a clustered FS in a Proxmox environment.

Regards.

Why don't you use GFS2 instead of OCFS2?
 
There is a good howto for multipathing:
http://pve.proxmox.com/wiki/ISCSI_Multipath
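
The wiki has the details; very roughly, the setup boils down to an /etc/multipath.conf with the wwid of your LUN (the value below is only a placeholder, read the real one with scsi_id) plus a check that both paths show up:

Code:
# /etc/multipath.conf (minimal sketch)
defaults {
        polling_interval        2
        path_grouping_policy    multibus
}
multipaths {
        multipath {
                # placeholder wwid - read yours with:
                #   /lib/udev/scsi_id -g -u -d /dev/sdX
                wwid    3600c0ff000d0000000000000000000
                alias   msa2000-lun0
        }
}

# verify: one map with paths through both iSCSI subnets
multipath -ll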

But I suggest you also take a look at a more modern storage system like Ceph RBD instead of a feature-limited setup like iSCSI.
Distributed storage gives you more features and high availability, is totally flexible, and is also much cheaper than any traditional SAN.
 
Hi Tom

Thanks for your input.

I have looked at the iSCSI multipath wiki you pointed me to; I was just not sure how to configure the network cards the best way to begin with.

Regarding Ceph RBD, I don't know it, but as I read it, it is built on ordinary hardware with e.g. an Ubuntu OS and then a Ceph package. If I need a lot of space, at least 10 TB, what would I use to house all those disks?
At the moment I have an MSA2000 with 8 TB of space to use and test on.
We also have an EMC VNX5300 with 17 TB of disk space (also iSCSI), but this is used in production with VMware at the moment. If I find a solid alternative to VMware this would become available too. I don't think I can get the funds from the boss to go out and buy another storage setup for now, so I guess I am stuck with iSCSI for the moment.

Best regards
Rantzauer
 
