PVE cluster fails to connect to PBS

barlund
New Member · Jul 8, 2024
Hello, I have several Proxmox Virtual Environment clusters (8.4) with VMs and I'd like to back them up to one PBS (4.0.15). The first cluster, pve-cluster1, connects to the PBS fine, but I have no luck with the other clusters.
[screenshot: error when connecting to PBS]
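(I'm adding the storage via the GUI; for reference, the CLI equivalent would be something like the following, where the storage ID, datastore name and fingerprint are placeholders, not my actual values:)

root@pve:~# pvesm add pbs pbs-backup --server 10.22.69.62 --datastore cluster1 --username root@pam --fingerprint <PBS cert fingerprint>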
There seems to be no firewall on the PBS blocking this connectivity. Any idea why this is not working? A tcpdump on the PBS (10.22.69.62) shows that the PBS is not answering the PVE's (10.22.69.68) requests:


[screenshot: tcpdump output on PBS]
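The capture above was taken with something along these lines (the interface name is a placeholder):

root@pbs:~# tcpdump -ni <iface> host 10.22.69.68 and tcp port 8007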
On the PBS side, the datastores are logically separated onto different LVM volumes:
root@pbs:~# lsblk
NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                   8:0    0 446.6G  0 disk
├─sda1                8:1    0  1007K  0 part
├─sda2                8:2    0     1G  0 part
└─sda3                8:3    0 445.6G  0 part
  ├─pbs-swap        252:4    0     8G  0 lvm  [SWAP]
  └─pbs-root        252:5    0 421.6G  0 lvm  /
sdb                   8:16   0    14T  0 disk
├─data-cluster1     252:0    0   1.4T  0 lvm  /cluster1
├─data-cluster3     252:1    0     3T  0 lvm  /cluster3
├─data-cluster4     252:2    0     3T  0 lvm  /cluster4
└─data-cluster5     252:3    0     3T  0 lvm  /cluster5

[screenshot]

BR
Jonas
 
From the screenshot you included, it doesn't necessarily follow that "pbs is not answering pve (10.22.69.68) requests".

What is in the logs on the PBS at the moment you are (unsuccessfully) trying to connect to it? E.g. in journalctl and in /var/log/proxmox-backup/api/access.log.
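For example, on the PBS host, while you retry the connection from the PVE side (proxmox-backup-proxy is the PBS API service):

root@pbs:~# journalctl -u proxmox-backup-proxy -f
root@pbs:~# tail -f /var/log/proxmox-backup/api/access.log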

What are the permissions on the PBS in the "Datastore" tree?

What (in detail) are you doing on the cluster side before you get this failure?
 
Tried to connect and checked the logs. Nothing gets written to journalctl or access.log from 10.22.69.68.
It goes like this:
[screenshot: the failing connection attempt]

PBS-side datastore tree:
[screenshot: PBS datastore tree]

If there are specific log entries I should hunt for, please let me know.

Thanks
Jonas
 

If on the "unlucky" PVE host you execute in the shell
telnet 10.22.69.62 8007
- what do you get on the screen?
 
Tried telnet from 2 different PVE hosts.

root@proxm-01-cluster6:~# telnet 10.22.69.62 8007
Trying 10.22.69.62...
Connected to 10.22.69.62.
Escape character is '^]'.

Connection closed by foreign host.



PBS journalctl:
root@pbs:~# journalctl -r

Sep 29 17:23:12 pbs proxmox-backup-proxy[1682]: [[::ffff:10.22.69.68]:39034] failed to check for TLS handshake: timed out while waiting for client to initiate TLS handshake
 
If the message "Connection closed by foreign host" did NOT appear immediately after you executed telnet, then we can at least assume the network lets the connection through OK.

Another try: from the "unlucky" PVE execute
proxmox-backup-client list --repository "USER@REALM@HOST:DATASTORE"

What gives?
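For example, with the placeholders filled in (root@pam and the datastore name here are just an illustration; use whatever matches your setup):

root@proxm-01-cluster6:~# proxmox-backup-client list --repository "root@pam@10.22.69.62:cluster1"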

(This suggestion is from a similar thread https://forum.proxmox.com/threads/p...-500-can-not-get-datastores.93854/post-408245
and in that thread the reason was jumbo frames; of course it may be something else in your situation. An MTU mismatch would fit your symptoms, though: small packets like the telnet connect get through, while larger packets such as the TLS handshake get dropped, which matches the "timed out while waiting for client to initiate TLS handshake" message in your journal.)

If the above proxmox-backup-client command succeeds and creating the storage still fails, I would compare the "lucky" clusters and nodes very carefully with the "unlucky" ones, looking for any differences in settings and configuration (including the attempted storage definitions).
 
Ok, nice hint. Now it works!! I do indeed use jumbo frames on my leaf switches and PBS interfaces. On the switches, the interfaces are in a bond (LACP 2x10G) with MTU 9192. I created a new VLAN and subnet for the PBS, but was forced to comment out the "mtu 8000" jumbo parameter on the PBS side and use the default MTU 1500.
I'm not sure why I can't use jumbo frames in the PBS interface config?

[screenshot: PBS network interface configuration]

Anyway, big thanks for the help! If jumbo frames can't be used on PBS interfaces, I'll manage with the defaults.

BR
Jonas
 

Glad I helped :).

I would be surprised if in fact jumbo frames can't be used in PBS interfaces.

Have a look at a nearby thread: https://forum.proxmox.com/threads/mtu-size-requirements.172867/
Nicolas Frey wrote there:
"the bridge (vmbr0) should inherit its MTU size from the NIC, so you should actually be able to omit it in the configuration."

If I see it correctly, you could set the MTU on the ens... interfaces (or maybe on the bond... interface?), not on the vmbr... interfaces.
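As an untested sketch, that could look like this in /etc/network/interfaces on the PBS (interface names, bond members and the address below are made up; adjust them to your host):

auto ens18
iface ens18 inet manual
        mtu 9000

auto ens19
iface ens19 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        bond-slaves ens18 ens19
        bond-mode 802.3ad
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 10.22.69.62/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

And to verify whether jumbo frames actually survive the whole path, you could ping from a PVE node with fragmentation forbidden; the payload size is the intended MTU minus 28 bytes of IP+ICMP headers (so 7972 for MTU 8000):

root@pve:~# ping -M do -s 7972 10.22.69.62

If that fails while "ping -M do -s 1472 10.22.69.62" works, some hop on the path is still limited to MTU 1500.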
 