Shared storage suggestion for a 5 node cluster?

1) Create a target portal group and only add your 10G interface -> home > Comstar > Target Portal Groups > create portal-group
2) Connect the target portal group from 1) to the target -> home > Comstar > Target Portal Groups > add target
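For reference, the underlying itadm commands on the OmniOS box should look roughly like this (a sketch only; the portal IP and target IQN are the ones that appear later in this thread):
Code:
# 1) create a portal group that contains only the 10G interface
itadm create-tpg portal-group-1 10.2.2.41:3260

# 2) bind the existing target to that portal group
itadm modify-target -t portal-group-1 iqn.2010-09.org.napp-it:1459891666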

From the Proxmox node, test:
iscsiadm -m discovery -t st -p 10.2.2.41:3260 (should display the target)
iscsiadm -m discovery -t st -p 10.1.10.41 (should display: iscsiadm: No portals found)
 
The target portal groups were already set up like that.

In the GUI I did:
Comstar > create portal-group, name = portal-group-1, using 10.2.2.41

cli shows:
Code:
root@sys4:/root# itadm list-tpg -v
TARGET PORTAL GROUP  PORTAL COUNT
portal-group-1  1
  portals:  10.2.2.41:3260

So the issue had to come before that.

I think that when I set up the iSCSI service, it made iSCSI listen on all interfaces.
Code:
# svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

Maybe the svcadm line above needs to specify the interface?
 
Code:
root@sys4:/root#  itadm list-target -v
TARGET NAME  STATE  SESSIONS
iqn.2010-09.org.napp-it:1459891666  online  6   
  alias:  target-1
  auth:  none (defaults)
  targetchapuser:  -
  targetchapsecret:  unset
  tpg-tags:  default
 
Then we have found the bug ;-) Your target is not connected to a target portal group, in which case it is exposed on every interface.
tpg-tags: default means every interface.

From my box:
Code:
# itadm list-target -v
TARGET NAME  STATE  SESSIONS
iqn.2010-09.org.napp-it:qdisk  online  2
  alias:  qdisk
  auth:  none
  targetchapuser:  -
  targetchapsecret:  unset
  tpg-tags:  pve-esx2 = 3,pve-esx1 = 2
iqn.2010-09.org.openindiana:vshare  online  16
  alias:  vshare
  auth:  none (defaults)
  targetchapuser:  -
  targetchapsecret:  unset
  tpg-tags:  pve-esx2 = 3,pve-esx1 = 2

As can be seen, my box exposes two targets, but only to the target portal groups pve-esx1 and pve-esx2.

Code:
# itadm list-tpg -v
TARGET PORTAL GROUP  PORTAL COUNT
pve-esx1  1
  portals:  10.0.1.10:3260
pve-esx2  1
  portals:  10.0.2.10:3260

You have forgotten this step:
2) Connect the target portal group from 1) to the target -> home > Comstar > Target Portal Groups > add target
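For the target shown above, that step should boil down to something like this (my guess at the itadm call behind the napp-it menu):
Code:
itadm modify-target -t portal-group-1 iqn.2010-09.org.napp-it:1459891666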
 
OK, after doing the missing step:
Code:
# itadm list-target -v
TARGET NAME  STATE  SESSIONS
iqn.2010-09.org.napp-it:1459891666  online  6   
  alias:  target-1
  auth:  none (defaults)
  targetchapuser:  -
  targetchapsecret:  unset
  tpg-tags:  portal-group-1 = 2

Code:
#  itadm list-tpg -v
TARGET PORTAL GROUP  PORTAL COUNT
portal-group-1  1   
  portals:  10.2.2.41:3260

and from pve:
Code:
# iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found

# iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666

So now only the 10G network (10.2.2.41) is used.
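For reference, on the Proxmox side the target can then be defined cluster-wide in /etc/pve/storage.cfg, roughly like this (the storage ID is a placeholder):
Code:
iscsi: omnios-10g
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none
With content none the LUN is not used directly; a shared LVM volume group is usually layered on top of it.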

thank you.
 
Mir

do you know how to set the MTU to 9000 on OmniOS?

I searched and came up with this, which does not work:
Code:
# ifconfig ixgbe0 mtu 9000
ifconfig: setifmtu: SIOCSLIFMTU: ixgbe0: Invalid argument
 
What do you mean by 'transit the internet'? My storage is connected to proxmox on a closed network.


I have made some performance tests in this thread:
https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/#post-133999
Oh I didn't realize omnios was a hardware product. I thought you were using napp-it as a cloud-based storage solution.
 
Here is another issue; I may have done something wrong, or the lvm.conf filter needs adjusting:
Found duplicate PV:
Code:
sys5  ~ # pvdisplay
  Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
  --- Physical volume ---
  PV Name  /dev/sdk
  VG Name  iscsi-lxc-vg
  PV Size  300.00 GiB / not usable 4.00 MiB
  Allocatable  yes
  PE Size  4.00 MiB
  Total PE  76799
  Free PE  72701
  Allocated PE  4098
  PV UUID  JxUvGz-KqhY-A6XZ-Aacc-4KrB-YNcT-q2DgDN

I had noticed this during a very slow KVM restore:
Code:
progress 96% (read 16500785152 bytes, duration 18018 sec)
progress 97% (read 16672620544 bytes, duration 18382 sec)
progress 98% (read 16844521472 bytes, duration 18752 sec)
progress 99% (read 17016422400 bytes, duration 19129 sec)
progress 100% (read 17188257792 bytes, duration 19311 sec)
total bytes read 17188257792, sparse bytes 7608152064 (44.3%)
space reduction due to 4K zero blocks 1.42%
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
TASK OK

Has anyone seen that on a working iSCSI system?

The issue could be caused by restoring a backup and then running the original and the restored KVM on the same system [with just the NIC changed]. I'll look for the link that mentioned it.
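One way to check whether /dev/sdj and /dev/sdk really are two paths to the same iSCSI LUN (rather than two different disks) is to compare their SCSI IDs on the Proxmox node, e.g.:
Code:
# identical output for both devices means the same LUN is visible twice
/lib/udev/scsi_id --whitelisted --device=/dev/sdj
/lib/udev/scsi_id --whitelisted --device=/dev/sdk

# and to see which iSCSI session each disk came in over
iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'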
 
Mr Mir: that can't be done when services are using the NIC.

Does something like this work to reboot into maintenance mode: init 1?

Or should I use GRUB at the start of boot? [I assume it is GRUB.]

This is probably the answer - I'll test tomorrow:
https://docs.oracle.com/cd/E23824_01/html/E24456/sysrecover-1.html
Code:
# init 0
ok boot -s

Boot device: /pci@780/pci@0/pci@9/scsi@0/disk@0,0:a File and args: -s
SunOS Release 5.11 Version 11.0 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights
reserved.
Booting to milestone "milestone/single-user:default".
Hostname: tardis.central
Requesting System Maintenance Mode
SINGLE USER MODE

Enter user name for system maintenance (control-d to bypass): root
Enter root password (control-d to bypass): xxxxxxx
single-user privilege assigned to root on /dev/console.
Entering System Maintenance Mode
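That Oracle example is for SPARC (OBP). On an x86 OmniOS box of this vintage the equivalent, assuming it still boots with the illumos GRUB, is to edit the kernel$ line in the GRUB menu at boot and append -s, roughly:
Code:
# press 'e' on the boot entry, edit the kernel$ line, append -s, then boot with 'b'
kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -s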
 
dladm set-linkprop -p mtu=9000 ixgbe0

I booted into maintenance mode and got an error trying to run that, as the device was busy.
This is what I did to set MTU 9000 instead:

In /kernel/drv/ixgbe.conf I set:
default_mtu = 9000;
then rebooted.

Result:
Code:
#  dladm show-linkprop -p mtu ixgbe0
LINK  PROPERTY  PERM VALUE  DEFAULT  POSSIBLE
ixgbe0  mtu  rw  9000  1500  1500-15500
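Jumbo frames only help if the Proxmox node (and any switch in between) uses MTU 9000 as well; a sketch for the PVE side in /etc/network/interfaces (interface name and address are placeholders):
Code:
auto eth2
iface eth2 inet static
        address 10.2.2.51
        netmask 255.255.255.0
        mtu 9000

# afterwards, verify end-to-end with a non-fragmenting ping (9000 - 28 header bytes)
ping -M do -s 8972 10.2.2.41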
 

1 - I fixed the slow restore by adding a write-log (ZFS log) device to the pool [the pool is a 5-drive SSD raidz1].
The restore was 22 times faster:
Code:
# no write-log
progress 99% (read 17016422400 bytes, duration 19129 sec)
progress 100% (read 17188257792 bytes, duration 19311 sec)
total bytes read 17188257792, sparse bytes 7608152064 (44.3%)
space reduction due to 4K zero blocks 1.42%
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
TASK OK

# Do again after write-log added:
progress 99% (read 17016422400 bytes, duration 861 sec)
progress 100% (read 17188257792 bytes, duration 869 sec)
total bytes read 17188257792, sparse bytes 7606726656 (44.3%)
space reduction due to 4K zero blocks 1.4%
  Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
  Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
TASK OK
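For reference, adding a log device to an existing pool on OmniOS is a one-liner (pool and device names below are placeholders):
Code:
# add a dedicated ZIL/SLOG device to the pool and check the layout
zpool add tank log c2t3d0
zpool status tank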

2 - Still need to solve the duplicate PV issue.
Mir - any quick clues on how to fix that?
 
I am not an expert on LVM, but I think your problem relates to the metadata not being refreshed, since the same disk is exposed twice (the metadata is stored on the disks). Maybe see this: https://forum.proxmox.com/threads/issue-with-lvm-local-storage-found-duplicate-pv.20292/#post-103410

Maybe first try this:
rm /etc/lvm/cache/.cache
and regenerate it with vgscan so it does not contain stale entries.
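If clearing the cache is not enough, the usual workaround is to let LVM scan only one of the two device paths via a filter in /etc/lvm/lvm.conf, something like this (adjust the device names; /dev/disk/by-path or by-id patterns are more robust than sdX names, which can change between reboots):
Code:
devices {
    # reject the duplicate path, accept everything else
    global_filter = [ "r|^/dev/sdj$|", "a|.*|" ]
}
If both paths are there on purpose, dm-multipath with LVM on top of the multipath device would be the cleaner solution.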
 
