iscsi error after upgrade proxmox 1.6 to 1.7

guillermo

New Member
Hello:
I upgraded both of my Proxmox 1.6 hosts to version 1.7:
>aptitude update
>aptitude safe-upgrade
After a reboot, right at startup, the system reported that it could not connect to any iSCSI target, even though everything worked fine until the upgrade.
I can still run discovery:
> iscsiadm -m discovery .....
and I can connect manually:
> iscsiadm -m node --targetname ....
The web interface hangs when I try to view the storages.
Any idea why all my iSCSI connections are down?

Thanks
 
Just check that your iSCSI node config files are set to automatic:

nano /etc/iscsi/nodes/iqn.2006-01.com.openfiler\:tsn.672802aca9d8/10.5.0.6\,3260\,1/default

Just press Tab after /etc/iscsi/nodes/ to auto-complete the path, so you end up with something like the above.

If node.startup is set to manual, change it to automatic, e.g.:

node.startup = automatic

and if node.conn[0].startup is set to manual, change it to automatic, e.g.:

node.conn[0].startup = automatic
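If you have many recorded nodes, editing each `default` file by hand gets tedious. Here is a minimal sketch that flips both settings in every record under a directory; the function name is mine, not part of open-iscsi, and you should try it on a copy of /etc/iscsi/nodes first:

```shell
# set_iscsi_auto DIR: in every node record file named "default" under DIR
# (normally /etc/iscsi/nodes), rewrite node.startup and node.conn[0].startup
# from "manual" to "automatic".
set_iscsi_auto() {
    find "$1" -type f -name default | while read -r f; do
        sed -i \
            -e 's/^node\.startup = manual$/node.startup = automatic/' \
            -e 's/^node\.conn\[0\]\.startup = manual$/node.conn[0].startup = automatic/' \
            "$f"
    done
}

# On a real host: set_iscsi_auto /etc/iscsi/nodes, then restart open-iscsi.
```

iscsiadm can make the same change record by record, without touching the files directly: `iscsiadm -m node -T <target> -p <portal> -o update -n node.startup -v automatic`.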

Then reboot and see if that has any effect.

Also check your syslog with

cat /var/log/messages

and see what the errors are in there. The best way to do this is to unmount anything you have manually connected and mounted, deactivate any volume groups you may have on iSCSI, and then run

/etc/init.d/open-iscsi restart

You will get some errors on screen at that point, so copy and paste them in here; then run cat /var/log/messages again and post any errors from that in here as well.

Hopefully there will be something in there that helps diagnose the issue further.
 
Thanks.
That was already OK. I found the solution another way, and I will explain it for everyone.
I have 2 Proxmox 1.6 hosts in a cluster, each with 2 NICs: one NIC for the VMs and the other for the connection to a SAN (an Openfiler 2.3).
My Openfiler has 5 NICs: one for management (192.168.40.4) and four in a bond dedicated to the iSCSI protocol (192.168.10.12).
I use LVM over iSCSI on the Openfiler.
I upgraded to Proxmox VE 1.7:

>aptitude update
>aptitude safe-upgrade
After the reboot I could no longer connect to the SAN.

A cut from syslog:

Dec 4 11:13:38 PROXMOX1 pvedaemon[2233]: WARNING: command '/usr/bin/iscsiadm --mode node --portal 192.168.40.4:3260 --targetname iqn.2006-01.com.openfiler:tsn.Raptor50GB --login' failed with exit code 255

From proxmox1 I run a discovery, and the Openfiler gives me every target on two portals (as always): one on my VM network, 192.168.40.0/24, and the other on my dedicated iSCSI network, 192.168.10.0/24:

>iscsiadm -m discovery -t st -p 192.168.10.12:3260
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_50_server2008
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.Raid6_75_vlprograII
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_75_vlprogramacion
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_30_Zentyal
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.Raptor50GB
192.168.40.4:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_50_server2008
192.168.40.4:3260,1 iqn.2006-01.com.openfiler:tsn.Raid6_75_vlprograII
192.168.40.4:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_75_vlprogramacion
192.168.40.4:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_30_Zentyal
192.168.40.4:3260,1 iqn.2006-01.com.openfiler:tsn.Raptor50GB

That was the multipath on the Openfiler; there should be just one portal, not two.

But Proxmox always tries to connect to the 192.168.40.4 portal instead of 192.168.10.12, so I tried (and failed) to force the iSCSI connection over NIC eth1 (network 192.168.10.0/24). To do that I created a new iface:

>iscsiadm -m iface -I ifaceRed10 --op new
>cat /etc/iscsi/ifaces/ifaceRed10
iface.iscsi_ifacename = ifaceRed10
iface.net_ifacename = eth1
iface.hwaddress = 90:E6:BA:5C:E7:BD
iface.transport_name = tcp
>iscsiadm -m discovery -t st -I ifaceRed10 -p 192.168.10.12:3260 -o update

Even so, Proxmox kept trying to connect to the 192.168.40.4 portal.
So there were two solutions for me:

a) The worse one: activate the ACL rule on the Openfiler to permit the iSCSI protocol on the 192.168.40.0/24 network:
[root@san etc]# cat /etc/initiators.allow
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was autogenerated
# by Openfiler. Any manual changes will be overwritten
# Generated at: Sat Dec 4 17:21:15 CET 2010
iqn.2006-01.com.openfiler:tsn.raid6_50_server2008 192.168.10.0/24 192.168.40.0/24
iqn.2006-01.com.openfiler:tsn.Raptor50GB 192.168.10.0/24 192.168.40.0/24
iqn.2006-01.com.openfiler:tsn.raid6_75_vlprogramacion 192.168.10.0/24 192.168.40.0/24
iqn.2006-01.com.openfiler:tsn.raid6_30_Zentyal 192.168.10.0/24 192.168.40.0/24
iqn.2006-01.com.openfiler:tsn.Raid6_75_vlprograII 192.168.10.0/24 192.168.40.0/24
# End of Openfiler configuration.
The problem with this solution is that I would be using the same network for VMs and iSCSI: bad performance.

b) The solution:
I left the ACL rule for iSCSI (/etc/initiators.allow) restricted to just the 192.168.10.0/24 network,

and I changed the iSCSI target on the Openfiler to listen only on the 192.168.10.0/24 network:
[root@san /]# cat /etc/sysconfig/iscsi-target
MEM_SIZE=1048576
LISTEN_ADDR="192.168.10.12"
[root@san /]# /etc/init.d/iscsi-target restart

On the Proxmox side I left everything in the default state.
Delete the node and discovery records from the database:

>iscsiadm -m discovery -p 192.168.40.4:3260 -o delete
>iscsiadm -m discovery -p 192.168.10.12:3260 -o delete
Delete the iface:

>iscsiadm -m iface -I ifaceRed10 --op delete
Avoid multipath on the Proxmox side, in /etc/iscsi/iscsid.conf:
node.startup = manual
and restart:
/etc/init.d/open-iscsi restart
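That node.startup change can also be scripted. A minimal sketch (the function name is mine); it takes the config file path as an argument so you can try it on a copy before pointing it at the open-iscsi defaults file, which is /etc/iscsi/iscsid.conf on Debian-based systems:

```shell
# set_default_startup FILE MODE: set the global node.startup default
# ("manual" or "automatic") in an open-iscsi config file.
set_default_startup() {
    sed -i "s/^node\.startup = .*/node.startup = $2/" "$1"
}

# On a real host: set_default_startup /etc/iscsi/iscsid.conf manual
```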


After that everything was OK:



PROXMOX1:/var/log# iscsiadm -m discovery -t st -p 192.168.10.12:3260
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_50_server2008
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.Raid6_75_vlprograII
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_75_vlprogramacion
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.raid6_30_Zentyal
192.168.10.12:3260,1 iqn.2006-01.com.openfiler:tsn.Raptor50GB
PROXMOX1:/var/log#

All the targets on just one portal.
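You can check that mechanically by counting the distinct portals in the discovery output. A throwaway helper (the name is mine) that reads the `iscsiadm -m discovery` text on stdin:

```shell
# count_portals: read `iscsiadm -m discovery -t st -p ...` output on stdin
# and print how many distinct portals (the "IP:port" before the comma) appear.
count_portals() {
    cut -d, -f1 | sort -u | wc -l
}
```

Piping the listing above through it should report 1; the earlier two-portal listing would report 2.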


I hope this will be useful for everyone.
There is one question that I could not resolve:
how do I force Proxmox to connect to the 192.168.10.12 portal instead of 192.168.40.4?
 
Dude, from your setup I really have no idea what you are trying to achieve.

Why would you even want to map a target on the 192.168.40.x subnet when it is being used as your management network and you only have one NIC in the Openfiler bound to that subnet/VLAN? You have the other 4 NICs bound to the iSCSI subnet on 192.168.10.x on Openfiler, so why would you want any iSCSI connection over the management network at all?

Management network = To connect to devices, servers, interfaces etc to manage said devices.
iSCSI network = For all data and traffic related to iSCSI storage
LAN = For all client/server traffic

So your environment is set up as follows?

openfiler
1 nic => management network
4 nics => iSCSI network

Proxmox
1 nic => iSCSI network
1 nic => management network or LAN?

You really should look at using VLANs on the switch and putting your Proxmox NICs into a bond; you would then get better performance on both your VMs and the iSCSI network.

What would be even better is if you got yourself an extra PCI NIC with 2x gigabit ports and set your environment up as follows:

Openfiler
1 nic => management network
4 nics => bond on iSCSI network with 802.3ad (if you have good switches that support it, like Cisco)

Proxmox
2 nics => bond and vlan trunk on LAN/management network
2 nics => bond on iSCSI network

Cheers
 
Thanks mightymouse2045 for this setup.
The workaround I applied was needed because the current configuration on my Proxmox does not give me real multipath, in the sense that if it cannot connect to 192.168.40.4 it should fall back to the other path, 192.168.10.12, and it does not do that. The same problem appears in this post:
http://forum.proxmox.com/threads/5254-After-Upgrade-to-1.7-ISCSI-Connection-Errors

About my setup, you are right:
openfiler
1 nic => management network
4 nics => iSCSI network
Proxmox
1 nic => iSCSI network
1 nic => management network and LAN

But it should be said that my setup is very low cost, I mean:
I have just one gigabit switch with VLANs; it carries a lot of traffic and only reaches 1000BT with a 7000 MTU.
My 192.168.40.0/24 network is used for management and LAN and is connected to the switch, while the 192.168.10.0/24 network runs over crossover Cat6 pairs at 1000BT.
The NICs on the Openfiler can reach 2000BT and a 10000 MTU, so I use four NICs to connect to four servers: two Proxmox and two ESX.
So, for me, sending iSCSI through the switch means bad performance.
And what you said seems right: put the Proxmox NICs in a bond and use VLANs to separate the two networks.

But I must insist: the problem is the multipath configuration on the Proxmox side.

Thanks
 
