multipathd: uevent trigger error

Hello,

we have a system environment consisting of 3 Proxmox servers and a central storage that is connected to the servers via iSCSI.
The 3 servers were all installed identically and joined into a cluster following the instructions. The storage was connected to the servers via multipathing and then added via LVM, again following the Proxmox instructions.

For some time now, 2 of the 3 servers have repeatedly been reporting the error "multipathd: uevent trigger error" in the logs, for no apparent reason.
The third server shows no errors; config files such as /etc/multipath.conf, /etc/network/interfaces and others have all been compared and show no differences.
The storage is still available and is reached on all paths, but these errors are spammed every minute.

Unfortunately, my own research on the internet was unsuccessful. Can anyone help? What does this error mean exactly, and how can I fix it?
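For reference, these are standard commands to inspect the multipath state in more detail (just a sketch, run as root; I have not pasted their output here):
Code:
# show the multipath topology and the state of every path
multipath -ll
# re-scan with maximum verbosity to see where the uevent handling fails
multipath -v3
# talk to the running daemon over its interactive socket
multipathd -k"show paths"
multipathd -k"show config"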

Furthermore, the gateway 192.168.0.1 is configured on the primary bridge vmbr0 in the web interface. If I check this on the console with "route", however, I get the following output:
Code:
root@proxmox01:~# route
Kernel-IP-Routentabelle
Ziel            Router          Genmask         Flags Metric Ref    Use Iface
10.0.0.0        *               255.255.255.0   U     0      0        0 vmbr99
192.168.0.0     *               255.255.0.0     U     0      0        0 vmbr0
This doesn't affect the functionality of the cluster, because the clustering runs over the separate 10.0.0.0 network, but it is still strange.
The installation was done normally, following the instructions.
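For completeness, this is how the routing table can be checked with iproute2 and how the missing default route could be added by hand (a sketch using our configured gateway; adding it manually would of course not be a permanent fix):
Code:
# show the kernel routing table, including any default route
ip route show
# add the default route by hand via the gateway configured on vmbr0
ip route add default via 192.168.0.1 dev vmbr0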

Installed package versions are below; the clocks on all servers are in sync.
Code:
root@proxmox01:~# pveversion -v
proxmox-ve-2.6.32: 3.4-155 (running kernel: 2.6.32-38-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-38-pve: 2.6.32-155
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-5
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
What are the contents of /etc/network/interfaces on the two failing servers?
It seems they have dropped their default route, which could be the cause of the problem.
 
None of the three servers has a default route.
The fact that no server has a default route is a second problem, not related to the main problem of the uevent trigger errors.

proxmox01 reports uevent trigger errors
proxmox02 reports no errors
proxmox03 reports uevent trigger errors

All servers have the same network config structure, just with different IPs.

Code:
root@proxmox01:~# cat /etc/network/interfaces 
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth4 inet manual

iface eth5 inet manual

iface eth6 inet manual

iface eth7 inet manual

iface eth8 inet manual

iface eth9 inet manual

auto bond0
iface bond0 inet manual
	slaves eth0 eth1
	bond_miimon 100
	bond_mode balance-rr

auto bond1
iface bond1 inet manual
	slaves eth2 eth3
	bond_miimon 100
	bond_mode balance-rr

auto bond2
iface bond2 inet manual
	slaves eth4 eth5
	bond_miimon 100
	bond_mode balance-rr

auto bond99
iface bond99 inet manual
	slaves eth8 eth9
	bond_miimon 100
	bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
	address  192.168.0.5
	netmask  255.255.0.0
	gateway  192.168.0.1
	bridge_ports bond0
	bridge_stp off
	bridge_fd 0
	mtu 9000
	txqueuelen 10000

auto vmbr1
iface vmbr1 inet manual
	bridge_ports bond1
	bridge_stp off
	bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
	bridge_ports bond2
	bridge_stp off
	bridge_fd 0

auto vmbr99
iface vmbr99 inet static
	address  10.0.0.12
	netmask  255.255.255.0
	bridge_ports bond99
	bridge_stp off
	bridge_fd 0
	mtu 9000
	txqueuelen 10000
 
None of the three servers has a default route.
The fact that no server has a default route is a second problem, not related to the main problem of the uevent trigger errors.
That is a wrong assumption. All servers have a default route configured on vmbr0: gateway 192.168.0.1.

The question is why this default route is not shown when running "route".
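A minimal way to check whether ifupdown applies the gateway would be the following (a sketch; note that taking vmbr0 down briefly interrupts traffic on that bridge):
Code:
# is there a default route at the moment?
ip route show default
# re-apply the interface configuration for vmbr0
ifdown vmbr0 && ifup vmbr0
# check again
ip route show default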
 
All servers have the gateway 192.168.0.1 set. The question is why it is not picked up by Proxmox and used as normal.
The question is why this default route is not shown when running "route".
Right, that is the question.

But my main problem is the "uevent trigger error" reported by multipathd:
Code:
...
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:28 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
May 18 15:13:29 proxmox01 multipathd: uevent trigger error
...

There is only this error; there are no other hints as to what causes it.
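A way to get more context than the bare "uevent trigger error" line would be to raise the daemon's log verbosity (a sketch; the value 3 is just an example, the option takes 0-6):
Code:
# /etc/multipath.conf -- raise the daemon's log verbosity
defaults {
        verbosity 3
}
After changing it, multipathd has to be restarted (on this Debian-based release that should be "/etc/init.d/multipath-tools restart") and the syslog checked again.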
 
Since iSCSI and multipath are network-related, you first need to fix your network before turning to upper-layer protocols. Is the default route missing on all of the servers, or only on the servers with the multipathd errors?
 
The default route is not shown on any of the servers.
But the gateway/default route is not needed here, because all servers and the storage are directly connected through the 10.0.0.0 network.
IP packets are only sent via the default route if the host has no address in the target network...
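A quick way to confirm which route gets used (a sketch with example addresses; 10.0.0.20 stands in for one of the storage targets):
Code:
# traffic to the storage network uses the directly connected route on vmbr99
ip route get 10.0.0.20
# traffic to an address outside the local networks would need the default route
ip route get 8.8.8.8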
 
