Best practice configuring storage

nasenmann72

Renowned Member
Dec 9, 2008
Germany, Saarland
Hi!

We have a PVE cluster with two nodes running. Now we want to put the PVE disk images on a SAN (an Openfiler-based storage server). We have set up a separate storage LAN with an HP ProCurve 1800-24G switch. Each node is connected to that switch with a 2 x 1 GbE trunk, and the storage server is connected over a 4 x 1 GbE trunk. LACP (802.3ad) is configured on the nodes and the switch. On the storage server we have a 1 TB RAID 10 array which should contain the virtual hard disks.
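For reference, the storage bond on each node is configured roughly like this (a sketch; the interface names and addresses are placeholders for our real values):

```
# /etc/network/interfaces on a PVE node -- storage bond only (sketch;
# eth2/eth3 and the addresses stand in for our real values)
auto bond0
iface bond0 inet static
        address 192.168.100.11
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
```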

Which of the following is the best way to configure PVE/SAN with regard to performance and manageability?

1) Create *one* iSCSI target for the 1 TB array on the storage server; configure this iSCSI target in PVE; configure an LVM group for every virtual machine in PVE on top of the iSCSI target.

Question: will the node use both interfaces of the trunk when accessing the iSCSI device? I noticed that in an 802.3ad trunk only one interface is used by a single virtual machine, so the maximum bandwidth is only 1 Gbit/s although 2 Gbit/s should be possible.

This way the iSCSI configuration would be easier to manage because there is only one iSCSI device.


2) Create an iSCSI target for every virtual machine on the storage server; configure an iSCSI target and an LVM group for every VM on the PVE cluster.

This way we would end up with a whole forest of iSCSI devices and LVM groups on PVE and the storage server, which would not be that easy to manage. But I think the load balancing on the trunks would be better, because the nodes see more virtual hard disks and access to them would be distributed more evenly across the trunk members.

Can anyone give us some hints on this?

Best regards,
Der Nasenmann
 
Question: will the node use both interfaces of the trunk when accessing the iSCSI device? I noticed that in an 802.3ad trunk only one interface is used by a single virtual machine, so the maximum bandwidth is only 1 Gbit/s although 2 Gbit/s should be possible.

There are 7 different bonding modes, see:

http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt
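If you stay with 802.3ad, keep in mind that the balancing is done per flow according to the transmit hash policy described in that document; with the default layer2 policy all traffic between one node and the storage server hashes onto a single slave. A sketch of the bond stanza with a layer3+4 hash policy (interface names and addresses are placeholders) would look like this:

```
# Storage bond with a layer3+4 transmit hash (sketch; see xmit_hash_policy
# in bonding.txt).  Different TCP connections can then hash onto different
# slaves, but a single connection is still limited to one 1 GbE link.
auto bond0
iface bond0 inet static
        address 192.168.100.11
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4
```

So a single iSCSI session will still not exceed 1 Gbit/s, but separate sessions can end up on different links.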

This way the iSCSI configuration would be easier to manage because there is only one iSCSI device.

You can manage it from the web interface; it is very easy to use.
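With option 1, the definition created via the web interface ends up in /etc/pve/storage.cfg and looks roughly like this (a sketch; the storage IDs, portal address and target IQN are placeholders for your setup):

```
# /etc/pve/storage.cfg -- option 1 as a sketch: one iSCSI target plus one
# shared LVM volume group on top of it (all names here are placeholders)
iscsi: openfiler-san
        portal 192.168.100.200
        target iqn.2006-01.com.openfiler:tsn.vmstore
        content none

lvm: vmdata
        vgname vmdata
        base openfiler-san:<LUN volume name>
        content images
        shared 1
```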

2) Create an iSCSI target for every virtual machine on the storage server; configure an iSCSI target and an LVM group for every VM on the PVE cluster.

This way we would end up with a whole forest of iSCSI devices and LVM groups on PVE and the storage server, which would not be that easy to manage. But I think the load balancing on the trunks would be better, because the nodes see more virtual hard disks and access to them would be distributed more evenly across the trunk members.

It is difficult to manage that way. Also, I do not see why you would get better performance.
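Note also that a single target does not prevent per-VM separation: PVE allocates one logical volume per virtual disk inside the shared volume group anyway. Done by hand it would look roughly like this (the device name, VG name and sizes are assumptions):

```
# Prepare the single iSCSI LUN as one shared volume group (sketch; /dev/sdb
# and the names/sizes are placeholders)
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb

# Roughly what PVE does itself when a disk is allocated on that LVM storage
lvcreate -L 32G -n vm-101-disk-1 vmdata
lvcreate -L 32G -n vm-102-disk-1 vmdata
```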
 
