Need some serious help quickly, please

Inlakesh

I installed ifupdown2 and restarted my Proxmox host, and now I get this message in the CLI: https://prnt.sc/q8yx05

I don't know what to do to fix this; the service doesn't start up properly.

I followed the message and ran "systemctl status pvesr.service"

And got this response:

● pvesr.service - Proxmox VE replication runner
Loaded: loaded (/lib/systemd/system/pvesr.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-12-11 02:21:00 +07; 33s ago
Process: 90224 ExecStart=/usr/bin/pvesr run --mail 1 (code=exited, status=24)
Main PID: 90224 (code=exited, status=24)
CPU: 418ms

Dec 11 02:21:00 aplusvps1 systemd[1]: Starting Proxmox VE replication runner...
Dec 11 02:21:00 aplusvps1 pvesr[90224]: Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 397.
Dec 11 02:21:00 aplusvps1 systemd[1]: pvesr.service: Main process exited, code=exited, status=24/n/a
Dec 11 02:21:00 aplusvps1 systemd[1]: Failed to start Proxmox VE replication runner.
Dec 11 02:21:00 aplusvps1 systemd[1]: pvesr.service: Unit entered failed state.
Dec 11 02:21:00 aplusvps1 systemd[1]: pvesr.service: Failed with result 'exit-code'.


Any ideas what to configure to get this working properly?
 
Hi,

it looks like ifupdown2 has problems coming up.
What Proxmox VE version do you use?
Code:
pveversion -v
Do you have a special network configuration?
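
As a side note: the "Unable to create new inotify object: Too many open files" message usually means the per-user inotify instance limit has been exhausted. A minimal sketch for inspecting and raising it (the value 512 is only an example, not a tested recommendation):
Code:
# show the current per-user inotify instance limit
sysctl fs.inotify.max_user_instances
# raise it at runtime (example value)
sysctl -w fs.inotify.max_user_instances=512
# persist the change across reboots
echo 'fs.inotify.max_user_instances = 512' > /etc/sysctl.d/99-inotify.conf
If the limit keeps being hit, something on the host is leaking inotify instances, so raising it only buys time.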
 
pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-23-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-11
pve-kernel-4.15.18-23-pve: 4.15.18-51
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph: 12.2.12-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

I am using 2 NICs for 2 different IP ranges:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

iface enp96s0f0 inet manual

iface enp96s0f1 inet manual

iface enp97s0f0 inet manual

iface enp97s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
address 25.34.144.245
netmask 255.255.255.128
gateway 25.34.144.129
bridge-ports eno1
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
bridge-ports eno2
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
 
The config looks normal, nothing special.

What did you get from the logs?
Code:
journalctl -u networking.service
journalctl -xe
dmesg

This should make no difference in this case, but note that in PVE 5.4 ifupdown2 is experimental.
 
Here is the result:
-- Logs begin at Wed 2019-12-11 02:09:06 +07, end at Wed 2019-12-11 15:13:00 +07. --
Dec 11 02:09:08 aplusvps1 systemd[1]: Starting ifupdown2 networking initialization...
Dec 11 02:09:08 aplusvps1 networking[5420]: networking: Configuring network interfaces
Dec 11 02:14:46 aplusvps1 systemd[1]: Started ifupdown2 networking initialization.
Dec 11 15:12:00 aplusvps1 systemd[1]: Started Proxmox VE replication runner.
-- Subject: Unit pvesr.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
Dec 11 15:12:03 aplusvps1 sshd[3993938]: Failed password for root from 112.85.42.176 port 34689 ssh2
Dec 11 15:13:00 aplusvps1 systemd[1]: Starting Proxmox VE replication runner...
-- Subject: Unit pvesr.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has begun starting up.
Dec 11 15:13:00 aplusvps1 systemd[1]: Started Proxmox VE replication runner.
-- Subject: Unit pvesr.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
Dec 11 15:14:00 aplusvps1 systemd[1]: Starting Proxmox VE replication runner...
-- Subject: Unit pvesr.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has begun starting up.
Dec 11 15:14:00 aplusvps1 systemd[1]: Started Proxmox VE replication runner.
-- Subject: Unit pvesr.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
Dec 11 15:14:47 aplusvps1 rrdcached[7467]: flushing old values
Dec 11 15:14:47 aplusvps1 rrdcached[7467]: rotating journals
Dec 11 15:14:47 aplusvps1 rrdcached[7467]: started new journal /var/lib/rrdcached/journal/rrd.journal.1576052087.052408
Dec 11 15:14:47 aplusvps1 rrdcached[7467]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1576044887.052504
Dec 11 15:14:47 aplusvps1 pmxcfs[7488]: [dcdb] notice: data verification successful
Dec 11 15:15:00 aplusvps1 systemd[1]: Starting Proxmox VE replication runner...
-- Subject: Unit pvesr.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has begun starting up.
Dec 11 15:15:00 aplusvps1 systemd[1]: Started Proxmox VE replication runner.
-- Subject: Unit pvesr.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
Dec 11 15:15:09 aplusvps1 pvedaemon[3636604]: <root@pam> successful auth for user 'root@pam'
Dec 11 15:16:00 aplusvps1 systemd[1]: Starting Proxmox VE replication runner...
-- Subject: Unit pvesr.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has begun starting up.
Dec 11 15:16:00 aplusvps1 systemd[1]: Started Proxmox VE replication runner.
-- Subject: Unit pvesr.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
 
And what about "dmesg"?
 
OK, you have problems with the i40e Intel driver.
Do you know which NIC this driver is used by?
If you install ifupdown, do the errors still exist?
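
To see which NIC the driver is bound to, something like this can help (a sketch; the grep pattern simply narrows the lspci output to Ethernet devices):
Code:
# list network devices together with the kernel driver in use
lspci -nnk | grep -iA3 ethernet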
 
I have absolutely no idea which it is; the motherboard has:
Intel® C621 controller for 14 SATA3 (6 Gbps) ports; RAID 0,1,5,10
Dual LAN with GbE
from C621

And the server has an extra NIC card installed, but none of the ports on that card are active:
IBM INTEL PRO/1000 PT QUAD PORT PCIe GIGABIT NIC HBA SERVER ADAPTER CARD

I think this problem appeared when I installed ifupdown2 and restarted the server.
 
Then try to install ifupdown, which will uninstall ifupdown2.
That way you would see if the problem still exists.
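
A sketch of how that could look (ifupdown and ifupdown2 conflict, so apt should offer to remove ifupdown2 when installing ifupdown; review the proposed changes before confirming):
Code:
apt update
# pulls in ifupdown and removes the conflicting ifupdown2
apt install ifupdown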
 
I am thinking about what you said regarding the driver and the NIC. Do you think it could have anything to do with the eno2 NIC, as we activated it at the same time ifupdown2 was installed?
 
I don't know.
Can you send me the HW list?

Code:
lspci -vvv
Then I could have a look at the used hardware/driver.
 
Yes, both of your NICs are Intel X722, which uses the i40e driver.
Can you try to boot with an older kernel, like pve-kernel-4.15.18-21-pve (4.15.18-48)?
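
A sketch for finding the matching boot entry (then select it under "Advanced options" in the GRUB menu at boot):
Code:
# list the kernels GRUB knows about
grep menuentry /boot/grub/grub.cfg | grep 4.15.18-21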
 
Then try to remove the VLAN awareness of vmbr1.
To do this, remove these two lines:
Code:
    bridge-vlan-aware yes
    bridge-vids 2-4094
 
Hi,
it's possible that this NIC driver doesn't support more than 255 VLANs when VLAN offloading is enabled
(Mellanox ConnectX-3 has the same problem).

https://sourceforge.net/p/e1000/bugs/526/

Try to reduce the number of VLANs in "bridge-vids", or disable VLAN offload: "ethtool -K enoX rx-vlan-filter off".
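
For example, a reduced bridge stanza could look like this (the 2-100 range is purely illustrative; list only the VLANs your guests actually need):
Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        # hypothetical reduced range, below the 255-VLAN hardware filter limit
        bridge-vids 2-100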
 
You must remove it and reboot, or call "ifdown vmbr1 && ifup vmbr1 -v".
 
I ran that now; just to verify everything is OK, I'm pasting the CLI output:
(does this look OK?)

:~# ifdown vmbr1 && ifup vmbr1 -v
info: loading builtin modules from ['/usr/share/ifupdown2/addons']
info: module ppp not loaded (module init failed: no /usr/bin/pon found)

info: executing /sbin/sysctl net.bridge.bridge-allow-multiple-vlans
info: executing /bin/pidof mstpd
info: executing /bin/ip rule show
info: executing /bin/ip -6 rule show
info: module ethtool not loaded (module init failed: /sbin/ethtool: not found)

info: address: using default mtu 1500
info: module ethtool not loaded (module init failed: /sbin/ethtool: not found)

info: module ppp not loaded (module init failed: no /usr/bin/pon found)

info: looking for user scripts under /etc/network
info: loading scripts under /etc/network/if-pre-up.d ...
info: loading scripts under /etc/network/if-up.d ...
info: loading scripts under /etc/network/if-post-up.d ...
info: loading scripts under /etc/network/if-pre-down.d ...
info: loading scripts under /etc/network/if-down.d ...
info: loading scripts under /etc/network/if-post-down.d ...
info: processing interfaces file /etc/network/interfaces
info: vmbr1: running ops ...
info: netlink: ip link show
info: netlink: ip addr show
info: executing /bin/ip addr help
info: address metric support: KO
info: vmbr1: netlink: ip link add vmbr1 type bridge
info: vmbr1: apply bridge settings
info: vmbr1: set bridge-fd 0
info: reading '/sys/class/net/vmbr1/bridge/stp_state'
info: vmbr1: netlink: ip link set vmbr1 type bridge with attributes
info: writing '1' to file /proc/sys/net/ipv6/conf/eno2/disable_ipv6
info: executing /bin/ip -force -batch - [link set dev eno2 master vmbr1
addr flush dev eno2
]
info: vmbr1: applying bridge port configuration: ['eno2']
info: executing /sbin/brctl showmcqv4src vmbr1
info: /sbin/brctl showmcqv4src: skipping unsupported command
info: vmbr1: applying bridge configuration specific to ports
info: vmbr1: processing bridge config for port eno2
info: vmbr1: netlink: ip link set dev vmbr1 up
info: reading '/proc/sys/net/mpls/conf/vmbr1/input'
info: executing /sbin/sysctl net.mpls.conf.vmbr1.input=0
info: executing /sbin/sysctl net.ipv4.conf.vmbr1.forwarding
info: executing /sbin/sysctl net.ipv6.conf.vmbr1.forwarding
info: executing /etc/network/if-up.d/postfix
info: executing /etc/network/if-up.d/upstart
info: executing /etc/network/if-up.d/openssh-server
info: running upperifaces (parent interfaces) if available ..
 
