error fetching datastores - 500 after upgrade to 2.2

rlljorge

Member
Hi,

My scheduled backups randomly fail; I am receiving this intermittent error on different nodes:

TASK ERROR: could not activate storage 'pbs02': pbs02: error fetching datastores - 500 Can't connect to 10.250.5.84:8007

pbs02 is mounted on all nodes of the cluster, and I can see the files and info.
If I run the scheduled job or an individual backup manually, it completes successfully.
I have jumbo frames configured and tested on all Proxmox nodes and on the PBS server.
I attached the log files showing the scheduled backup failing at 12:30 and the manual run for the same VM succeeding at 12:34.
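
For reference, a manual re-check of the storage from a node would look roughly like this (pbs02 is the storage name from storage.cfg):

Code:
# re-check the PBS storage by hand from a PVE node
pvesm status --storage pbs02   # shows whether the storage activates
pvesm list pbs02               # lists the backups visible on the datastore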

Regards,

Rodrigo L L Jorge
 

Attachments

  • job.txt (9.3 KB)
  • pbs.txt (12.7 KB)
  • node.txt (1.8 KB)

dcsapak

Proxmox Staff Member
hi,

i'd look at your network: not only do you get a 'can't connect', you also get corosync retransmissions, which indicate an overloaded network
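
to rule the links out, something like this on a node should show whether corosync considers them healthy (the journalctl time window is just an example matching your logs):

Code:
# check corosync link/ring state on a pve node
corosync-cfgtool -s
pvecm status
# look for retransmit messages around the backup window
journalctl -u corosync --since "12:25" --until "12:40"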
 

rlljorge

Member
Hi,

The corosync ring network traffic is separate from the data network, and the ring network is not used for the Proxmox Backup Server.

The problem occurs on 12 distinct nodes using the same PBS, and it started after the upgrade to 2.2-3.

Can I downgrade to 2.1? Is there some way to do that?

Best Regards,
 

dcsapak

Proxmox Staff Member
Can I downgrade to 2.1? Is there some way to do that?
no that's not really supported

The corosync ring network traffic is separate from the data network, and the ring network is not used for the Proxmox Backup Server.
ok, i'd investigate regardless, so that it does not become a problem later on

The problem occurs on 12 distinct nodes using the same PBS, and it started after the upgrade to 2.2-3.
mhm... can you post your network config from pve & pbs?
 

rlljorge

Member
Hello,

The Proxmox VE network config:

Code:
root@proxmox11:~# pveversion
pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.30-2-pve)

root@proxmox11:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet static
        address 192.168.30.139/27
#ring0

auto eno4
iface eno4 inet static
        address 192.168.30.171/27
#ring1

auto enp4s0f0
iface enp4s0f0 inet manual
        mtu 1500

auto enp4s0f1
iface enp4s0f1 inet manual
        mtu 1500

auto enp6s0f0
iface enp6s0f0 inet manual
        mtu 9000

auto enp6s0f1
iface enp6s0f1 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        bond-slaves eno1 enp4s0f0
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1

auto bond1
iface bond1 inet manual
        bond-slaves eno2 enp4s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno2

auto bond2
iface bond2 inet manual
        bond-slaves enp6s0f0 enp6s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp6s0f0
        mtu 9000

auto bond2.45
iface bond2.45 inet static
        address 10.250.5.43/25
        mtu 9000

auto bond2.51
iface bond2.51 inet static
        address 10.250.6.42/25
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 10.1.0.73/24
        gateway 10.1.0.253
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr2
iface vmbr2 inet manual
        bridge-ports bond2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

The storage config:
Code:
pbs: pbs02
        datastore backup
        server 10.250.5.84
        content backup
        fingerprint ca:11:18:47:cf:29:4d:3b:1c:b3:43:d8:78:d5:67:3e:d2:35:f7:0b:42:6e:43:f5:1a:09:28:de:00:68:a4:e1
        prune-backups keep-all=1
        username root@pam

The Proxmox Backup Server config:
Code:
root@pbs02:~# proxmox-backup-manager version
proxmox-backup-server 2.2.3-2 running version: 2.2.3
root@pbs02:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp6s0f1 inet manual

iface enp8s0f0 inet manual

iface enp8s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp8s0f1
        bond-slaves enp8s0f0 enp8s0f1
        mtu 9000

iface enp6s0f0 inet manual

auto bond0.11
iface bond0.11 inet static
        address 10.1.0.84/24
        gateway 10.1.0.253
        mtu 1500

auto bond0.45
iface bond0.45 inet static
        address 10.250.5.84/25
        mtu 9000
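
Just to be sure the nodes reach 10.250.5.84 over the jumbo-frame VLAN and not over another interface, a check like this could be run on a node (bond2.45 is the VLAN from the config above):

Code:
# confirm which interface the node uses to reach the PBS address from storage.cfg
ip route get 10.250.5.84
# expected: the route goes out via bond2.45 (the mtu 9000 VLAN), not via vmbr0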

Best Regards,


Rodrigo
 

dcsapak

Proxmox Staff Member
ok and what else goes over vmbr2/bond2 on the pve side?
are you sure all networking in between also has mtu 9000 set?

aside from that i cannot really say what's causing this; at least here we don't see this behaviour...
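
a quick way to verify the path mtu end-to-end would be a don't-fragment ping from a pve node to the pbs address, e.g.:

Code:
# 8972 bytes of payload + 28 bytes of ip/icmp header = 9000
ping -M do -s 8972 -c 3 10.250.5.84
# if something in between only passes 1500, this fails while a normal ping still works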
 

rlljorge

Member
Jun 2, 2020
18
1
8
41
Hi,

vmbr2 is used for NFS traffic from the VMs to the NAS storage; in some cases the VMs mount the NFS shares directly.
I checked, and jumbo frames are enabled and working on all nodes.

The problem only occurs when the backup job is started by the scheduler; when I run the same job manually, it works fine.
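
If it helps, I can also compare the journals around both runs, roughly like this (service names assuming PVE 7.2 / PBS 2.2; the times match the attached logs):

Code:
# on the PVE node: what runs around the scheduled (12:30) and manual (12:34) backups
journalctl --since "12:25" --until "12:40" -u pvescheduler -u pvedaemon
# on the PBS side: whether the connection attempts arrive at all
journalctl --since "12:25" --until "12:40" -u proxmox-backup-proxy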

Regards,

Rodrigo L L Jorge
 
