Proxmox clustering and iSCSI shared storage

lukasz.matys

Dec 10, 2015
Hello.
We run a few Proxmox clusters with great success ;-).

A new dedicated iSCSI storage is ready for production use with the Proxmox clusters.

So, as usual, some questions:

1) Sharing a single iSCSI storage:
- we have one iSCSI storage which is accessible to the nodes of cluster1
- we defined the shared iSCSI storage following the Proxmox wiki page
- the cluster nodes use different disk names:

- node A: sda
- node B: sda, sdb
- node C: sda, sdb, sdc

- now let's say that node A will use the iSCSI LUN as shared storage (not directly), and it will show up in the OS as disk 'sdb'

Nodes B and C already use disks 'sdb'/'sdc'. Can this cause any trouble on nodes B and C?

I cannot find clear documentation on how Proxmox realizes shared storage using LVM.
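
My guess, and part of why I am asking, is that LVM identifies the physical volume by the LVM metadata/UUID written on the LUN, not by the /dev/sdX name, so the different device names per node should not matter. A minimal sketch of what I would expect, with a hypothetical volume group name 'vg_iscsi':

# on node A the LUN shows up as /dev/sdb
# pvs -o pv_name,vg_name,pv_uuid
  PV         VG        PV UUID
  /dev/sdb   vg_iscsi  e0fVQ2-...

# on node C the very same LUN shows up as /dev/sdd
# pvs -o pv_name,vg_name,pv_uuid
  PV         VG        PV UUID
  /dev/sdd   vg_iscsi  e0fVQ2-...

# the storage is addressed by the volume group name, so the per-node device name does not matter
# lvs vg_iscsi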

2) Sharing storage using two storage nodes with iSCSI

I would like to build shared storage with no single point of failure for Proxmox HA VM hosts.
Of course the problem with the compute nodes is solved: we can define VMs with HA support using shared LVM storage. But the storage itself is still a single point of failure.

If we had a second dedicated iSCSI storage, could we create an LVM volume group (on the compute node) which consists of iSCSI disks from both storage nodes, and then create LVM RAID1 volumes for the HA VMs?

https://access.redhat.com/documenta..._Administration/raid_volumes.html#create-raid

How can this be achieved using the Proxmox GUI?
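
For reference, on the command line the idea would look roughly like this (only a sketch with hypothetical device names, volume group name and size; whether the raid1 segment type is actually usable on a clustered/shared volume group is part of what I am asking):

# LUN from storage node 1 is visible as /dev/sdb, LUN from storage node 2 as /dev/sdc
# pvcreate /dev/sdb /dev/sdc
# vgcreate vg_ha /dev/sdb /dev/sdc

# one mirror leg per storage node
# lvcreate --type raid1 -m 1 -L 20G -n vm-100-disk-1 vg_ha /dev/sdb /dev/sdc
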
Best regards.
 
Are you sure that multipath is needed in the scenario I described?
As I said, we can build an LVM volume group which consists of two or more disks...

https://access.redhat.com/documenta...ume_Manager_Administration/mirror_create.html

Next, you create an LVM RAID1 volume, selecting disks that are included in the specified group:

"The following command creates a mirrored logical volume with a single mirror and a single log that is not mirrored. The volume is 500 MB in size, it is named mirrorlv, and it is carved out of volume group vg0. The first leg of the mirror is on device /dev/sda1, the second leg of the mirror is on device /dev/sdb1, and the mirror log is on /dev/sdc1."

# lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1

(We have no single point of failure at the network layer: we use stacked switches, with bonded interfaces on both the compute nodes and the storage nodes connected to separate switches in the stack.)
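
For illustration, the bonding on each node looks roughly like this (Debian-style /etc/network/interfaces; the interface names, address and bonding mode below are just an example, not our exact configuration):

# each slave NIC is cabled to a different switch in the stack
auto bond0
iface bond0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode active-backup
        bond-miimon 100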

What do you think about this?
Regards.
 
I don't know Red Hat's cluster implementation, so on that matter I really have no opinion. But if a storage node fails, won't that cause your mirror to run in degraded mode?
 
Hi.
It is not based on the Red Hat cluster implementation; it simply uses LVM features on top of CLVM. Of course, if one storage node fails (for example, crashes), the LVM mirrored volume ends up in a degraded state.

# stable lvm mirror

[root@link-08 ~]# lvs -a -o +devices
  LV                 VG   Attr   LSize   Origin Snap%  Move Log          Copy%  Devices
  groupfs            vg   mwi-a- 752.00M              groupfs_mlog 100.00 groupfs_mimage_0(0),groupfs_mimage_1(0)
  [groupfs_mimage_0] vg   iwi-ao 752.00M                                  /dev/sda1(0)
  [groupfs_mimage_1] vg   iwi-ao 752.00M                                  /dev/sdb1(0)
  [groupfs_mlog]     vg   lwi-ao   4.00M                                  /dev/sdc1(0)

# one of the mirror legs (/dev/sda1) fails; writing to the volume makes LVM detect the failure
# dd if=/dev/zero of=/dev/vg/groupfs count=10
10+0 records in
10+0 records out

You can use the lvs command to verify that the device is now a linear device. Because of the failed disk, I/O errors occur.

# lvs -a -o +devices
/dev/sda1: read failed after 0 of 2048 at 0: Input/output error
/dev/sda2: read failed after 0 of 2048 at 0: Input/output error
  LV      VG   Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  groupfs vg   -wi-a- 752.00M                               /dev/sdb1(0)

At this point you should still be able to use the logical volume, but there will be no mirror redundancy.

When a mirror leg fails, LVM converts the mirrored volume into a linear volume, which continues to operate as before but without the mirrored redundancy. At that point, you can add a new disk device to the system to use as a replacement physical device and rebuild the mirror.

Convert the linear volume back to its original mirrored state.

# lvconvert -m 1 /dev/vg/groupfs /dev/sdi1 /dev/sdb1 /dev/sdc1
# lvs -a -o +devices
  LV                 VG   Attr   LSize   Origin Snap%  Move Log          Copy%  Devices
  groupfs            vg   mwi-a- 752.00M              groupfs_mlog  68.62 groupfs_mimage_0(0),groupfs_mimage_1(0)
  [groupfs_mimage_0] vg   iwi-ao 752.00M                                  /dev/sdb1(0)
  [groupfs_mimage_1] vg   iwi-ao 752.00M                                  /dev/sdi1(0)
  [groupfs_mlog]     vg   lwi-ao   4.00M                                  /dev/sdc1(0)

I will test it ;-).
Regards.
 
1) Sharing a single iSCSI storage:
- we have one iSCSI storage which is accessible to the nodes of cluster1
- we defined the shared iSCSI storage following the Proxmox wiki page
- the cluster nodes use different disk names:

- node A: sda
- node B: sda, sdb
- node C: sda, sdb, sdc

The answer in pictures :) Look:

[Attachment: pic1.png]


and fill in the required field:


[Attachment: pic2.png]

This gives the cluster shared storage without the clutter of disks on the nodes.
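
For reference, that dialog ends up as an entry in /etc/pve/storage.cfg roughly like this (the storage name, portal address and target IQN below are just examples):

iscsi: san1
        portal 192.168.1.100
        target iqn.2015-12.com.example:storage.lun1
        content images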

--
Best regards,
Gosha
 
Hi.
I have read the Proxmox wiki, and it suggests something different, namely not to use LUNs directly (quoted below; a sketch of the resulting storage.cfg follows the quote):

LVM Groups with Network Backing
In this configuration, network block devices (iSCSI targets) are used as the physical volumes for LVM logical volume storage. This is a two step procedure and can be fully configured via the web interface.

  1. First, add the iSCSI target. (On some iSCSI targets you need to add the IQN of the Proxmox VE server to allow access.)
    • Click 'Add iSCSI Target' on the Storage list
    • As storage name use whatever you want but take care, this name cannot be changed later.
    • Give the 'Portal' IP address or servername and scan for unused targets
    • disable 'use LUNs directly'
    • Click save
  2. Second, add an LVM group on this target.
    • Click 'Add LVM Group' on the Storage list
    • As storage name use whatever you want but take care, this name cannot be changed later.
    • For 'Base Storage', use the drop down menu to select the previously defined iSCSI target.
    • For 'Base Volume' select a LUN
    • For 'Volume Group Name' give a unique name (this name cannot be changed later).
    • Enable shared use (recommended)
    • Click save
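
If I understand it correctly, those two steps end up as two entries in /etc/pve/storage.cfg, roughly like the sketch below (the storage names, portal, target, volume group name and the 'base' LUN identifier are made-up placeholders; the 'base' value is the LUN selected in step 2):

iscsi: san1
        portal 192.168.1.100
        target iqn.2015-12.com.example:storage.lun1
        content none

lvm: san1-lvm
        vgname vg_san1
        base san1:0.0.0.scsi-...
        shared 1
        content images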

Regards.
 
