Shared storage suggestion for a 5-node cluster?

My recommendation would be to create two different storages in Proxmox: one using zfs_over_iscsi for KVM, which provides all the ZFS features like (linked) clones, live snapshots, etc., and one using LVM with network backing for LXC, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing. Pay attention to these two important recommendations:
  • Disable 'use LUNs directly'
  • Enable shared use (recommended)
All the above can be done from a single ZFS pool.
Manually create a volume and share this volume through an iSCSI target. Use this target with the iSCSI plugin to create a shared LUN for Proxmox, on which you create an LVM storage with network backing. Use the same ZFS pool for the configuration of the zfs_over_iscsi storage for your KVM guests. The zfs_over_iscsi plugin will not overwrite the zvol used for your iSCSI target for the LVM storage. This way you have the option of running cluster-wide guests using both KVM and LXC, which can be live migrated across the cluster either manually or through HA. Live migration for LXC is still in the making but will enter Proxmox before you know it ;-)
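
For reference, the "manually create a volume and share it" part can also be done from the OmniOS shell instead of the napp-it GUI. A rough sketch, assuming a pool called 'data' and a 300G zvol named 'lvmvol' (adjust the names to your setup):
Code:
# create a thick-provisioned zvol to back the LVM storage
zfs create -V 300G data/lvmvol

# make sure the COMSTAR/iSCSI target service is running
svcadm enable -r svc:/network/iscsi/target:default

# create a logical unit from the zvol, then expose it with a view
stmfadm create-lu /dev/zvol/rdsk/data/lvmvol
stmfadm add-view <GUID-printed-by-create-lu>
stmfadm list-lu -v

# create an iSCSI target if one does not exist yet
itadm create-target
itadm list-target -v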
 
I just discovered what napp-it is. I am doing a similar storage setup. What are you doing to protect your VM disk images while they transit the internet?

How is the performance of your VMs with this setup? I would imagine that every VM disk read/write operation takes significantly longer than with locally stored images...
 
I just discovered what napp-it is. I am doing a similar storage setup. What are you doing to protect your VM disk images while they transit the internet?
What do you mean by 'transit the internet'? My storage is connected to Proxmox on a closed network.

How is the performance of your VMs with this setup? I would imagine that every VM disk read/write operation takes significantly longer than with locally stored images...
I have made some performance tests in this thread:
https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/#post-133999
 
My recommendation would be to create two different storages in Proxmox: one using zfs_over_iscsi for KVM, which provides all the ZFS features like (linked) clones, live snapshots, etc., and one using LVM with network backing for LXC, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing. Pay attention to these two important recommendations:
  • Disable 'use LUNs directly'
  • Enable shared use (recommended)
All the above can be done from a single ZFS pool.
Manually create a volume and share this volume through an iSCSI target. Use this target with the iSCSI plugin to create a shared LUN for Proxmox, on which you create an LVM storage with network backing. Use the same ZFS pool for the configuration of the zfs_over_iscsi storage for your KVM guests. The zfs_over_iscsi plugin will not overwrite the zvol used for your iSCSI target for the LVM storage. This way you have the option of running cluster-wide guests using both KVM and LXC, which can be live migrated across the cluster either manually or through HA. Live migration for LXC is still in the making but will enter Proxmox before you know it ;-)

I am a little confused about setting up the LVM.


"Manually create a volume and share this volume through an iscsi target"

is the volume created on napp-it ?

then at pve use storage > add > iscsi ?
 
OK, getting close.
I'm stuck at 'add an LVM group on this target'.

Here is what I've done so far to try the setup:

0 - For KVM, use ZFS over iSCSI; storage.cfg result:
Code:
zfs: iscsi-sys4
  target iqn.2010-09.org.napp-it:1459891666
  iscsiprovider comstar
  blocksize 8k
  portal 10.2.2.41
  pool data
  content images
  nowritecache

1 - Manually create a volume

napp-it Disks > Volumes > Create volume: name lvmvol, size 300G, uncheck thin provisioned.


2 - Share this volume through an iSCSI target. PVE Storage > Add > iSCSI.
storage.cfg result:
Code:
iscsi: sys4-lvmvol
  target iqn.2010-09.org.napp-it:1459891666
  portal 10.2.2.41
  content none

3 - Add an LVM group on this target.
Storage > Add > LVM
name: iscsi-lvm-for-lxc

For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target.
sys4-lvmvol (iSCSI)

For 'Base Volume', select a LUN.

**There are none to choose from** <<< Issue to fix; I must have skipped a step or done something wrong. TBD
 
In step 1 you are missing some steps:
1b) home > Comstar > Logical Units > create volume LU
1c) home > Comstar > Views > add view

Add 1b: Choose the volume created in 1a to create a LUN from.
Add 1c: Choose the LUN created in 1b to add a view to.

Your LUN should now be visible from Proxmox for use as the base volume.
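
If it still doesn't show up, a quick way to check from one of the Proxmox nodes whether the LUN is actually exported (standard open-iscsi and pvesm commands; the portal and storage name are taken from your config above):
Code:
# re-scan the target and list what the portal announces
iscsiadm -m discovery -t sendtargets -p 10.2.2.41:3260

# list the LUNs Proxmox sees on the iSCSI storage
pvesm list sys4-lvmvol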
 
In step 1 you are missing some steps:
1b) home > Comstar > Logical Units > create volume LU
1c) home > Comstar > Views > add view

Add 1b: Choose the volume created in 1a to create a LUN from.
Add 1c: Choose the LUN created in 1b to add a view to.

Your LUN should now be visible from Proxmox for use as the base volume.

Progress.

Does this look sane?

Code:
zfs: iscsi-sys4
  target iqn.2010-09.org.napp-it:1459891666
  iscsiprovider comstar
  blocksize 8k
  portal 10.2.2.41
  pool data
  content images
  nowritecache

iscsi: sys4-lvmvol
  target iqn.2010-09.org.napp-it:1459891666
  portal 10.2.2.41
  content none

lvm: iscsi-lvm-for-lxc
  vgname iscsi-lxc-vg
  base sys4-lvmvol:0.0.0.scsi-3600144f000000808000057056d6d0001
  content rootdir
  shared

Code:
# service open-iscsi restart
# dmesg -c

[125391.042821] Loading iSCSI transport class v2.0-870.
[125391.048593] iscsi: registered transport (tcp)
[125391.066692] iscsi: registered transport (iser)
[125397.333065] scsi host11: iSCSI Initiator over TCP/IP
[125397.340885] scsi host12: iSCSI Initiator over TCP/IP
[125397.850368] scsi 12:0:0:0: Direct-Access  SUN  COMSTAR  1.0  PQ: 0 ANSI: 5
[125397.850498] scsi 11:0:0:0: Direct-Access  SUN  COMSTAR  1.0  PQ: 0 ANSI: 5
[125397.851413] sd 12:0:0:0: Attached scsi generic sg7 type 0
[125397.851659] sd 11:0:0:0: Attached scsi generic sg8 type 0
[125397.851927] sd 12:0:0:0: [sdh] 629145600 512-byte logical blocks: (322 GB/300 GiB)
[125397.852395] sd 11:0:0:0: [sdi] 629145600 512-byte logical blocks: (322 GB/300 GiB)
[125397.853221] sd 12:0:0:0: [sdh] Write Protect is off
[125397.853225] sd 12:0:0:0: [sdh] Mode Sense: 53 00 00 00
[125397.853497] sd 12:0:0:0: [sdh] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[125397.853693] sd 11:0:0:0: [sdi] Write Protect is off
[125397.853695] sd 11:0:0:0: [sdi] Mode Sense: 53 00 00 00
[125397.854146] sd 11:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[125397.857212] sd 12:0:0:0: [sdh] Attached SCSI disk
[125397.859966] sd 11:0:0:0: [sdi] Attached SCSI disk
PS: I'll try to make a corrected step-by-step document. Should it stay here or go to the wiki?
 
Since you have been able to create the VG, there must be a connection.

PS: Do you export two LUNs of equal size, or is it the same LUN connected twice?
In the latter case this is dangerous.

Thanks for catching that.

I'm supposed to have just one LUN; the other must be a hangover from an earlier attempt. I will try to fix it.

This is a test system; I'll start over following updated instructions later on.
 
PS: Do you export two LUNs of equal size, or is it the same LUN connected twice?
In the latter case this is dangerous.

I assume you are referring to the output from 'service open-iscsi restart' showing two 'disks' [sdh and sdi]? I'm not familiar with how that is supposed to look.
Code:
[125397.851927] sd 12:0:0:0: [sdh] 629145600 512-byte logical blocks: (322 GB/300 GiB)
[125397.852395] sd 11:0:0:0: [sdi] 629145600 512-byte logical blocks: (322 GB/300 GiB)

napp-it shows just one logical unit.
storage.cfg shows one LVM for iSCSI.

Any clues on where to remove the extra unit?
 
Are there multiple paths to the storage? Misconfigured multipath could cause the problem you are experiencing.

I'll study up on 'multipath'.

Is there a menu in napp-it for multipath? [I could not find it.]

napp-it has two interfaces. I tried to make it so only the storage network IP was used, via:
Comstar > Create portal-group: name portal-group-1 (use 10.2.2.41, the storage network)

Prior to setting up iSCSI in the napp-it GUI, I did this from the CLI:
Code:
svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
I am not sure if the warning about 'multiple instances' needs to be dealt with.
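
A quick way to confirm the target service actually came online despite that warning (standard OmniOS service/COMSTAR commands):
Code:
# check the state of the iSCSI target service
svcs -l svc:/network/iscsi/target:default

# list configured targets and their state
itadm list-target -v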
 
Unless you have created two views to the LUN in OmniOS, the problem has to be found on the Proxmox side.
 
Unless you have created two views to the LUN in OmniOS, the problem has to be found on the Proxmox side.

There is only one view in OmniOS.

On PVE, in /etc/iscsi/nodes, both IP addresses have a configuration set up:
Code:
# ls -lR /etc/iscsi/nodes
/etc/iscsi/nodes:
total 1
drw------- 4 root root 4 Apr  6 16:37 iqn.2010-09.org.napp-it:1459891666/

/etc/iscsi/nodes/iqn.2010-09.org.napp-it\:1459891666:
total 1
drw------- 2 root root 3 Apr  6 16:37 10.1.10.41,3260,1/
drw------- 2 root root 3 Apr  6 16:37 10.2.2.41,3260,1/

/etc/iscsi/nodes/iqn.2010-09.org.napp-it\:1459891666/10.1.10.41,3260,1:
total 5
-rw------- 1 root root 1839 Apr  6 16:37 default

/etc/iscsi/nodes/iqn.2010-09.org.napp-it\:1459891666/10.2.2.41,3260,1:
total 5
-rw------- 1 root root 1838 Apr  6 16:37 default

systemctl status iscsi :
Code:
# systemctl -l status  iscsi
● open-iscsi.service - LSB: Starts and stops the iSCSI initiator services and logs in to default targets
  Loaded: loaded (/etc/init.d/open-iscsi)
  Drop-In: /lib/systemd/system/open-iscsi.service.d
  └─fix-systemd-deps.conf
  Active: active (running) since Wed 2016-04-06 16:37:13 EDT; 13h ago
  Process: 23391 ExecStop=/etc/init.d/open-iscsi stop (code=exited, status=0/SUCCESS)
  Process: 23378 ExecStop=/etc/init.d/umountiscsi.sh stop (code=exited, status=0/SUCCESS)
  Process: 23449 ExecStart=/etc/init.d/open-iscsi start (code=exited, status=0/SUCCESS)
  CGroup: /system.slice/open-iscsi.service
  ├─23465 /usr/sbin/iscsid
  └─23466 /usr/sbin/iscsid

Apr 06 16:37:13 sys5 open-iscsi[23449]: Starting iSCSI initiator service: iscsidln: failed to create symbolic link ‘/run/sendsigs.omit.d/iscsid.pid’: File exists
Apr 06 16:37:13 sys5 open-iscsi[23449]: .
Apr 06 16:37:13 sys5 open-iscsi[23449]: Setting up iSCSI targets:
Apr 06 16:37:13 sys5 open-iscsi[23449]: iscsiadm: No records found
Apr 06 16:37:13 sys5 open-iscsi[23449]: .
Apr 06 16:37:13 sys5 open-iscsi[23449]: Mounting network filesystems:.
Apr 06 16:37:13 sys5 open-iscsi[23449]: Enabling network swap devices:.
Apr 06 16:37:14 sys5 iscsid[23465]: iSCSI daemon with pid=23466 started!
Apr 06 16:37:15 sys5 iscsid[23465]: Connection1:0 to [target: iqn.2010-09.org.napp-it:1459891666, portal: 10.2.2.41,3260] through [iface: default] is operational now
Apr 06 16:37:15 sys5 iscsid[23465]: Connection2:0 to [target: iqn.2010-09.org.napp-it:1459891666, portal: 10.1.10.41,3260] through [iface: default] is operational now

I assume the two IP-based iSCSI configurations are the cause of the 'same LUN connected twice'?
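
If so, I guess the extra portal record could be dropped with something along these lines (assuming the 10.1.10.41 path is the unwanted one):
Code:
# log out of the session on the unwanted portal
iscsiadm -m node -T iqn.2010-09.org.napp-it:1459891666 -p 10.1.10.41:3260 --logout

# delete the stored node record so it is not logged in again on restart
iscsiadm -m node -T iqn.2010-09.org.napp-it:1459891666 -p 10.1.10.41:3260 -o delete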
 
Yes, your OmniOS box is accessible to Proxmox from two different IPs:
- 10.1.10.41
- 10.2.2.41

I agree.

However, above you wrote:
'Are there multiple paths to the storage? Misconfigured multipath could cause the problem you are experiencing.'

Could the two different IP connections cause 'misconfigured multipath'?
 
If two paths exist to your storage and this is intentional, you must install multipath on every Proxmox host; otherwise chances are high that you will mess up your storage. Alternatively, you can create a bond to your storage. If the bond is to provide real HA it must span two switches, and to be able to do this it will require stackable switches.
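
For reference, a minimal sketch of what the multipath route could look like on each Proxmox node. This follows the usual Debian multipath-tools setup and assumes the WWID of the LUN shown in your storage.cfg base volume earlier (3600144f000000808000057056d6d0001); adjust to taste:
Code:
apt-get install multipath-tools

# /etc/multipath.conf - blacklist everything, whitelist only the shared LUN
defaults {
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600144f000000808000057056d6d0001"
}
multipaths {
    multipath {
        wwid "3600144f000000808000057056d6d0001"
        alias lvm-lun
    }
}

# restart multipath and check that both paths are grouped under one device
service multipath-tools restart
multipath -ll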
 
