open-iscsi starts before the network

juniper

Renowned Member
Oct 21, 2013
In my installation iSCSI starts before the network, so whenever I reboot the system the iSCSI daemon has to wait for the network to come up.
 
Here you can see the iSCSI daemon starting before my network interface is up:


Code:
May 02 10:34:23 ********** iscsid[1854]: iSCSI daemon with pid=1857 started!
May 02 10:34:27 ********** iscsid[1854]: connect to 10.10.200.105:3260 failed (No route to host)
May 02 10:34:27 ********** iscsid[1854]: connect to 10.10.10.10:3260 failed (No route to host)
May 02 10:34:27 ********** iscsid[1854]: connect to 10.10.200.100:3260 failed (No route to host)
May 02 10:34:27 ********** iscsid[1854]: connect to 10.10.200.103:3260 failed (No route to host)
May 02 10:34:27 ********** iscsid[1854]: connect to 10.10.200.101:3260 failed (No route to host)
May 02 10:34:27 ********** iscsid[1854]: connect to 10.10.200.102:3260 failed (No route to host)
May 02 10:34:28 ********** kernel: ixgbe 0000:04:00.0 enp4s0f0: NIC Link is Up 10 Gbps, Flow Control: None
May 02 10:34:28 ********** kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp4s0f0: link becomes ready
May 02 10:34:28 ********** kernel: ixgbe 0000:04:00.1 enp4s0f1: NIC Link is Up 10 Gbps, Flow Control: None
May 02 10:34:28 ********** kernel: vmbr0: port 1(enp4s0f1) entered blocking state
May 02 10:34:28 ********** kernel: vmbr0: port 1(enp4s0f1) entered forwarding state
May 02 10:34:28 ********** kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
May 02 10:34:34 ********** iscsid[1854]: connect to 10.10.200.105:3260 failed (No route to host)
May 02 10:34:34 ********** iscsid[1854]: connect to 10.10.10.10:3260 failed (No route to host)
May 02 10:34:34 ********** iscsid[1854]: connect to 10.10.200.100:3260 failed (No route to host)
May 02 10:34:34 ********** iscsid[1854]: connect to 10.10.200.103:3260 failed (No route to host)
May 02 10:34:34 ********** iscsid[1854]: connect to 10.10.200.101:3260 failed (No route to host)
May 02 10:34:34 ********** iscsid[1854]: connect to 10.10.200.102:3260 failed (No route to host)
May 02 10:34:41 ********** iscsid[1854]: connect to 10.10.200.105:3260 failed (No route to host)
May 02 10:34:41 ********** iscsid[1854]: connect to 10.10.10.10:3260 failed (No route to host)
May 02 10:34:41 ********** iscsid[1854]: connect to 10.10.200.100:3260 failed (No route to host)
May 02 10:34:41 ********** iscsid[1854]: connect to 10.10.200.103:3260 failed (No route to host)
May 02 10:34:41 ********** iscsid[1854]: connect to 10.10.200.101:3260 failed (No route to host)
May 02 10:34:41 ********** iscsid[1854]: connect to 10.10.200.102:3260 failed (No route to host)
May 02 10:34:48 ********** iscsid[1854]: connect to 10.10.200.105:3260 failed (No route to host)
May 02 10:34:48 ********** iscsid[1854]: connect to 10.10.10.10:3260 failed (No route to host)
May 02 10:34:48 ********** iscsid[1854]: connect to 10.10.200.100:3260 failed (No route to host)
May 02 10:34:48 ********** iscsid[1854]: connect to 10.10.200.103:3260 failed (No route to host)
May 02 10:34:48 ********** iscsid[1854]: connect to 10.10.200.101:3260 failed (No route to host)
May 02 10:34:48 ********** iscsid[1854]: connect to 10.10.200.102:3260 failed (No route to host)
May 02 10:34:55 ********** iscsid[1854]: connect to 10.10.200.105:3260 failed (No route to host)
May 02 10:34:55 ********** iscsid[1854]: connect to 10.10.10.10:3260 failed (No route to host)
May 02 10:34:55 ********** iscsid[1854]: connect to 10.10.200.100:3260 failed (No route to host)
May 02 10:34:55 ********** iscsid[1854]: connect to 10.10.200.103:3260 failed (No route to host)
May 02 10:34:55 ********** iscsid[1854]: connect to 10.10.200.101:3260 failed (No route to host)
May 02 10:34:55 ********** iscsid[1854]: connect to 10.10.200.102:3260 failed (No route to host)
May 02 10:35:00 ********** systemd[1]: Starting Proxmox VE replication runner...
May 02 10:35:00 ********** kernel: scsi host11: iSCSI Initiator over TCP/IP
May 02 10:35:00 ********** kernel: scsi host12: iSCSI Initiator over TCP/IP
May 02 10:35:00 ********** kernel: scsi host13: iSCSI Initiator over TCP/IP
May 02 10:35:00 ********** kernel: scsi host14: iSCSI Initiator over TCP/IP
May 02 10:35:00 ********** kernel: scsi host15: iSCSI Initiator over TCP/IP
 
Your open-iscsi.service should contain an 'After=network-online.target ...' entry in order for the service to be started after the network is up.
Check `systemctl show open-iscsi.service | grep After`

The same should be true for `systemctl show iscsid.service | grep After`
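
If that ordering were missing, one way to add it would be a systemd drop-in instead of editing the packaged unit file - a minimal sketch, assuming a standard Debian/PVE systemd setup:
```
# drop-in for open-iscsi.service (the same pattern works for iscsid.service)
mkdir -p /etc/systemd/system/open-iscsi.service.d
cat > /etc/systemd/system/open-iscsi.service.d/wait-for-network.conf <<'EOF'
[Unit]
# pull in and order after network-online.target so target logins
# are only attempted once the network is reported as up
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
```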
 
Code:
root@**********:/var/log# systemctl show open-iscsi.service | grep After
RemainAfterExit=yes
After=system.slice multipathd.service iscsid.service network-online.target systemd-journald.socket
 
Code:
root@**********:/etc/iscsi# systemctl show iscsid.service | grep After
RemainAfterExit=no
After=network-online.target system.slice systemd-journald.socket network.target multipathd.service
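
Since both units already order themselves after network-online.target, it may also help to check what actually implements that target on this host and what the units really waited for at boot - a sketch, assuming a standard Debian/PVE install with ifupdown:
```
# which services does network-online.target pull in / wait for?
systemctl list-dependencies network-online.target

# what did iscsid.service actually wait for during the last boot?
systemd-analyze critical-chain iscsid.service
```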
 
Hmm - it does work in my local test setup here without any modifications:
```
May 02 19:21:46 pve-local-03 systemd[1]: Reached target Network.
May 02 19:21:46 pve-local-03 systemd[1]: Reached target Network is Online.
May 02 19:21:46 pve-local-03 systemd[1]: Starting iSCSI initiator daemon (iscsid)...
May 02 19:21:46 pve-local-03 systemd[1]: Starting LXC network bridge setup...
May 02 19:21:46 pve-local-03 iscsid[1183]: iSCSI logger with pid=1194 started!
May 02 19:21:46 pve-local-03 systemd[1]: iscsid.service: Failed to read PID from file /run/iscsid.pid: Invalid argument
May 02 19:21:46 pve-local-03 iscsid[1194]: iSCSI daemon with pid=1195 started!
May 02 19:21:46 pve-local-03 systemd[1]: Started iSCSI initiator daemon (iscsid).
May 02 19:21:46 pve-local-03 systemd[1]: Starting Login to default iSCSI targets...
```

please post the output of:
* `cat /etc/network/interfaces`
* the complete log for the boot
* `systemctl -a |grep -i iscsi`
* ` dpkg -l |grep -i iscsi`

Do you have any other specifics in your setup (e.g. are you using multipath)?
 

Code:
root@*********:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp4s0f0
iface enp4s0f0 inet static
    address  10.10.200.154
    netmask  255.255.255.0
    gateway  10.10.200.1
    dns-nameservers 10.10.200.1
    dns-search ***********

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp4s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
    address  0.0.0.0
    netmask  255.255.255.0
    bridge-ports enp4s0f1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#Bridge ALL VLAN

Code:
root@********:~# systemctl -a |grep -i iscsi
  dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dx0DNS8\x2dywk9\x2dpuQn\x2dUa68\x2dsfD2\x2d1L36\x2dh0Wo3f.device                                  loaded    active     plugged   iSCSI_Storage                                                                                                
  dev-disk-by\x2did-scsi\x2d1SYNOLOGYiSCSI_Storage:dfd0ada2\x2d87bc\x2d41af\x2db5e9\x2d7536c904c1ba.device                                loaded    active     plugged   iSCSI_Storage                                                                                                
  dev-disk-by\x2did-scsi\x2d36001405dfd0ada2d87bcd41afdb5e9d7.device                                                                      loaded    active     plugged   iSCSI_Storage                                                                                                
  dev-disk-by\x2did-scsi\x2dSSYNOLOGY_iSCSI_Storage_dfd0ada2\x2d87bc\x2d41af\x2db5e9\x2d7536c904c1ba.device                               loaded    active     plugged   iSCSI_Storage                                                                                                
  dev-disk-by\x2did-wwn\x2d0x6001405dfd0ada2d87bcd41afdb5e9d7.device                                                                      loaded    active     plugged   iSCSI_Storage                                                                                                
  dev-disk-by\x2dpath-ip\x2d10.10.200.105:3260\x2discsi\x2diqn.2000\x2d01.com.synology:storage\x2d1.5ae93b244c\x2dlun\x2d0.device         loaded    active     plugged   iSCSI_Storage                                                                                                
  dev-sdb.device                                                                                                                          loaded    active     plugged   iSCSI_Storage                                                                                                
  sys-devices-platform-host11-session1-target11:0:0-11:0:0:0-block-sdb.device                                                             loaded    active     plugged   iSCSI_Storage                                                                                                
  iscsid.service                                                                                                                          loaded    active     running   iSCSI initiator daemon (iscsid)                                                                              
  open-iscsi.service                                                                                                                      loaded    active     exited    Login to default iSCSI targets


Code:
root@*********:~# dpkg -l |grep -i iscsi
ii  libiscsi7:amd64                      1.17.0-1.1                     amd64        iSCSI client shared library
ii  open-iscsi                           2.0.874-3~deb9u1               amd64        iSCSI initiator tools

And yes, I use multipath.
 
address 0.0.0.0 netmask 255.255.255.0
That is not necessary and probably won't work - please remove these two lines.
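In other words the bridge can simply stay without an IP address; a sketch of the corrected stanza, based on the interfaces file posted above:
```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp4s0f1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```
(with 'inet manual' instead of 'inet static', since a static stanza without an address is not valid for ifupdown)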

The dpkg and systemctl output look the same as in my setup.

please provide the output of `multipath -ll`

and the (anonymized) log would still be needed.
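
One way to capture that log is journalctl - a sketch; anonymize hostnames and IPs before posting:
```
# full journal of the current boot, without the pager
journalctl -b --no-pager > /tmp/boot.log
```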
 

Code:
root@********:~# multipath -ll
3600c0ff00028252320c3655c01000000 dm-1 HP,MSA 1040 SAN
size=3.6T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 12:0:0:3 sde 8:64  active ready running
| `- 14:0:0:3 sdk 8:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 13:0:0:3 sdh 8:112 active ready running
  `- 15:0:0:3 sdn 8:208 active ready running
3600c0ff00028252304676c5901000000 dm-2 HP,MSA 1040 SAN
size=2.7T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 14:0:0:2 sdj 8:144 active ready running
| `- 12:0:0:2 sdd 8:48  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 13:0:0:2 sdg 8:96  active ready running
  `- 15:0:0:2 sdm 8:192 active ready running
3600c0ff000282523ff6c0d5a01000000 dm-0 HP,MSA 1040 SAN
size=2.7T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 12:0:0:1 sdc 8:32  active ready running
| `- 14:0:0:1 sdi 8:128 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 13:0:0:1 sdf 8:80  active ready running
  `- 15:0:0:1 sdl 8:176 active ready running

For the interfaces configuration change I have to wait until tomorrow morning.

Thank you in advance
 
Hi, changing the interfaces configuration doesn't solve the problem.

During boot I see:

FAILED TO START PROXMOX VE REPLICATION RUNNER

In my opinion pvesr.service starts before the network is ready:


Code:
root@**********:~# journalctl | grep pvesr
May 03 07:41:01 ********** pvesr[2076]: ipcc_send_rec[1] failed: Connection refused
May 03 07:41:01 ********** pvesr[2076]: ipcc_send_rec[2] failed: Connection refused
May 03 07:41:01 ********** pvesr[2076]: ipcc_send_rec[3] failed: Connection refused
May 03 07:41:01 ********** pvesr[2076]: Unable to load access control list: Connection refused
May 03 07:41:01 ********** systemd[1]: pvesr.service: Main process exited, code=exited, status=111/n/a
May 03 07:41:01 ********** systemd[1]: pvesr.service: Unit entered failed state.
May 03 07:41:01 ********** systemd[1]: pvesr.service: Failed with result 'exit-code'.
May 03 07:42:00 ********** pvesr[2511]: ipcc_send_rec[1] failed: Connection refused
May 03 07:42:00 ********** pvesr[2511]: ipcc_send_rec[2] failed: Connection refused
May 03 07:42:00 ********** pvesr[2511]: ipcc_send_rec[3] failed: Connection refused
May 03 07:42:00 ********** pvesr[2511]: Unable to load access control list: Connection refused
May 03 07:42:00 ********** systemd[1]: pvesr.service: Main process exited, code=exited, status=111/n/a
May 03 07:42:00 ********** systemd[1]: pvesr.service: Unit entered failed state.
May 03 07:42:00 ********** systemd[1]: pvesr.service: Failed with result 'exit-code'.

BTW, after the system has booted everything works fine.
 
Just for information: my cluster is made up of 5 servers with the same configuration, 3 HP and 2 Huawei.

Only the 2 Huawei servers have this problem (the Huawei servers have 10 GbE ethernet cards).
 
And one question:

Is there a reason for not using systemd-networkd for Proxmox network configuration?
 
FAILED TO START PROXMOX VE REPLICATION RUNNER

In my opinion pvesr.service starts before the network is ready:
Hm, it seems the service is lacking a dependency on pmxcfs - please open an enhancement request at https://bugzilla.proxmox.com

However, these warning messages are not a problem (it would only be a problem if you have storage replication configured and they occur after the system is fully booted up).
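
As a local stopgap until that is fixed, the ordering could be added with a drop-in as well - a minimal sketch, assuming the missing dependency is on pve-cluster.service (the unit that provides pmxcfs):
```
mkdir -p /etc/systemd/system/pvesr.service.d
cat > /etc/systemd/system/pvesr.service.d/wait-for-pmxcfs.conf <<'EOF'
[Unit]
# make sure pmxcfs (and with it the access control list) is available
# before the replication runner is started
Wants=pve-cluster.service
After=pve-cluster.service
EOF
systemctl daemon-reload
```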

Only the 2 Huawei servers have this problem (the Huawei servers have 10 GbE ethernet cards).
Maybe it's a problem with the Huawei servers then (their 10G NICs) - please try to upgrade all firmware to the latest versions.
Do the other servers use 1G or 10G interfaces?
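
If the delay simply comes from the 10G link needing a few seconds to negotiate after ifup has already returned, one possible workaround is to hold ifup until the carrier is up, so that network-online.target (and everything ordered after it) really sees a usable link. A sketch against the interfaces file posted above - the interface name and the 30 second limit are assumptions:
```
auto enp4s0f0
iface enp4s0f0 inet static
    address  10.10.200.154
    netmask  255.255.255.0
    gateway  10.10.200.1
    # wait up to 30 seconds for the physical link before ifup returns
    post-up /bin/sh -c 'for i in $(seq 1 30); do [ "$(cat /sys/class/net/enp4s0f0/carrier 2>/dev/null)" = "1" ] && break; sleep 1; done'
```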


Is there a reason for not using systemd-networkd for Proxmox network configuration?
PVE parses /etc/network/interfaces in order to present it in the GUI.
 
OK, I'll open an enhancement request for pvesr.service.

The other 3 servers use 1G interfaces, and I'm pretty sure the Huawei servers are up to date, but I'll check.

Thanks
 
