What I'd like to do is limit iSCSI to the 10G network, 10.2.2.0/24.
Is there a way to limit iSCSI to that network in OmniOS?
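For reference, a sketch of the itadm commands that produce the state shown in the listings below (the portal group name and IQN are the ones from this thread; verify the syntax against your OmniOS release):

# create a portal group containing only the 10G address
itadm create-tpg portal-group-1 10.2.2.41:3260
# bind the target to that group instead of the default (which listens on all portals)
itadm modify-target -t portal-group-1 iqn.2010-09.org.napp-it:1459891666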
root@sys4:/root# itadm list-tpg -v
TARGET PORTAL GROUP PORTAL COUNT
portal-group-1 1
portals: 10.2.2.41:3260
# svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
root@sys4:/root# itadm list-target -v
TARGET NAME STATE SESSIONS
iqn.2010-09.org.napp-it:1459891666 online 6
alias: target-1
auth: none (defaults)
targetchapuser: -
targetchapsecret: unset
tpg-tags: default
# itadm list-target -v
TARGET NAME STATE SESSIONS
iqn.2010-09.org.napp-it:1459891666 online 6
alias: target-1
auth: none (defaults)
targetchapuser: -
targetchapsecret: unset
tpg-tags: portal-group-1 = 2
# itadm list-tpg -v
TARGET PORTAL GROUP PORTAL COUNT
portal-group-1 1
portals: 10.2.2.41:3260
# iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
# iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666
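From the Proxmox/Linux initiator side, discovery and login through the restricted portal would look something like this (standard open-iscsi usage; IQN and portal taken from above):

iscsiadm -m discovery -t sendtargets -p 10.2.2.41:3260
iscsiadm -m node -T iqn.2010-09.org.napp-it:1459891666 -p 10.2.2.41:3260 --login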
# ifconfig ixgbe0 mtu 9000
ifconfig: setifmtu: SIOCSLIFMTU: ixgbe0: Invalid argument
dladm set-linkprop -p mtu=9000 ixgbe0
Do you know how to set the MTU to 9000 in OmniOS?
I searched and came up with this, which does not work:
# ifconfig ixgbe0 mtu 9000
ifconfig: setifmtu: SIOCSLIFMTU: ixgbe0: Invalid argument
What do you mean by 'transit the internet'? My storage is connected to Proxmox on a closed network.
I have made some performance tests in this thread:
https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/#post-133999
Oh, I didn't realize OmniOS was a hardware product. I thought you were using napp-it as a cloud-based storage solution.
dladm set-linkprop -p mtu=9000 ixgbe0
Mr Mir: that can't be done while services are using the NIC.
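One way around that, assuming you can briefly take the interface down (run it from the console, not over ixgbe0 itself; a sketch using the address from this thread):

ipadm delete-if ixgbe0                 # unplumb IP so the link is no longer in use
dladm set-linkprop -p mtu=9000 ixgbe0  # the MTU change is accepted now
ipadm create-if ixgbe0                 # plumb the interface again
ipadm create-addr -T static -a 10.2.2.41/24 ixgbe0/v4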
Does something like this work to reboot into maintenance mode: init 1?
Or should I use GRUB at the start of boot? [I assume it is GRUB.]
# init 0
ok boot -s
Boot device: /pci@780/pci@0/pci@9/scsi@0/disk@0,0:a File and args: -s
SunOS Release 5.11 Version 11.0 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights
reserved.
Booting to milestone "milestone/single-user:default".
Hostname: tardis.central
Requesting System Maintenance Mode
SINGLE USER MODE
Enter user name for system maintenance (control-d to bypass): root
Enter root password (control-d to bypass): xxxxxxx
single-user privilege assigned to root on /dev/console.
Entering System Maintenance Mode
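Note that the transcript above is from a SPARC box (the ok prompt is OpenBoot); on an x86 OmniOS install the equivalent is passing -s to the kernel, e.g. rebooting straight into single-user mode with:

reboot -- -s

or appending -s to the kernel$ line from the GRUB menu at boot.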
dladm set-linkprop -p mtu=9000 ixgbe0
# dladm show-linkprop -p mtu ixgbe0
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
ixgbe0 mtu rw 9000 1500 1500-15500
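For jumbo frames to help, the initiator side (and any switch ports in the path) must use the same MTU; on the Proxmox box that is something like the following, with the interface name hypothetical:

ip link set dev eth2 mtu 9000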
Here is another issue; I may have done something wrong, or the lvm.conf filter needs adjusting:
Found duplicate PV:
sys5 ~ # pvdisplay
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
--- Physical volume ---
PV Name /dev/sdk
VG Name iscsi-lxc-vg
PV Size 300.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 76799
Free PE 72701
Allocated PE 4098
PV UUID JxUvGz-KqhY-A6XZ-Aacc-4KrB-YNcT-q2DgDN
I had noticed this on a very slow KVM restore:
progress 96% (read 16500785152 bytes, duration 18018 sec)
progress 97% (read 16672620544 bytes, duration 18382 sec)
progress 98% (read 16844521472 bytes, duration 18752 sec)
progress 99% (read 17016422400 bytes, duration 19129 sec)
progress 100% (read 17188257792 bytes, duration 19311 sec)
total bytes read 17188257792, sparse bytes 7608152064 (44.3%)
space reduction due to 4K zero blocks 1.42%
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
TASK OK
Has anyone seen that on a working iSCSI system?
The issue could be caused by restoring a backup and running the original and backup KVM on the same system [with just the NIC changed]. I'll look for the link that mentioned it.
# no write-log
progress 99% (read 17016422400 bytes, duration 19129 sec)
progress 100% (read 17188257792 bytes, duration 19311 sec)
total bytes read 17188257792, sparse bytes 7608152064 (44.3%)
space reduction due to 4K zero blocks 1.42%
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
TASK OK
# Do again after write-log added:
progress 99% (read 17016422400 bytes, duration 861 sec)
progress 100% (read 17188257792 bytes, duration 869 sec)
total bytes read 17188257792, sparse bytes 7606726656 (44.3%)
space reduction due to 4K zero blocks 1.4%
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
Found duplicate PV JxUvGzKqhYA6XZAacc4KrBYNcTq2DgDN: using /dev/sdk not /dev/sdj
TASK OK
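(If I am reading it right, "write-log" here is napp-it's term for a ZFS separate log device; adding one to a pool is a one-liner along these lines, with the pool and device names hypothetical:)

zpool add tank log c4t1d0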
I am not an expert on LVM, but I think your problem relates to the metadata not being refreshed, since the same disk is exposed twice (metadata is stored on the disks). Maybe see this: https://forum.proxmox.com/threads/issue-with-lvm-local-storage-found-duplicate-pv.20292/#post-1034102

Still need to solve the duplicate PV issue.
Mir, any quick clues on how to fix that?
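For what it's worth, the usual workaround until multipathing is configured is an lvm.conf filter that rejects the duplicate path, roughly like the sketch below (/dev/sdj is the duplicate from the output above; stable /dev/disk/by-id names are safer in practice). Run vgscan afterwards to confirm the warning is gone.

# /etc/lvm/lvm.conf
devices {
    global_filter = [ "r|^/dev/sdj$|" ]
}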
It would be nice to see some performance tests from inside an LXC container.
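A quick way to get comparable numbers from inside a container would be fio against the iSCSI-backed mount, e.g. (mountpoint hypothetical):

fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 --directory=/mnt/test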