Veeam PVE worker connecting to PVE node

SZ-PX

PoC running PVE 8.4.1 and Veeam 12.3.2.3617.

error:
1490] ERROR | [ProxmoxRemoteAgent][ProxmoxRestClient]: <== Request "Get" "https://10.30.78.36:8006/api2/json/nodes", body: "{}"
2025-06-25 07:43:26.2738 00003 [1490] ERROR | [ProxmoxRemoteAgent][ProxmoxRestClient]: ==> Response "Get" "https://10.30.78.36:8006/api2/json/nodes", "status: Error", duration: "2 min 8 sec 467 msec", body: """"

Veeam support claims this is an error from the PVE node; port 8006 is open from the worker to PVE.

Any ideas for a new approach to troubleshooting?

Regards

Peter
 
Hi @SZ-PX, welcome to the forum!

It's important to note that only Veeam can definitively explain what that specific error means. To an outside observer, it appears that a GET request took 2 minutes and 8 seconds, which is extraordinarily long, especially considering most GET requests complete in under a second.

Also, based on the context, this message likely reflects communication between the Veeam Remote Agent and Proxmox VE, not directly between the Veeam Server and PVE. This aligns with your statement that communication between the worker and PVE is allowed.

Since you're looking to troubleshoot, here are a few steps you can take:

- Spin up a Linux VM on the same network where the worker is deployed. Use curl to send API requests to the PVE node to confirm connectivity and response time (see the command sketch after this list).
- Ask Veeam where the detailed logs of Veeam–PVE communication are stored. If memory serves, they are pulled from the worker and saved somewhere on the Windows host.
- Run tcpdump on the bridge interface that the worker VM is attached to. This can help you observe the traffic and see where delays might be occurring.
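For the curl check in the first item, a rough sketch (the API token root@pam!veeam-test and the bridge name vmbr0 are placeholders, not values from your setup):

    # Time an API call against the PVE node from a test VM on the worker's network
    time curl -k -s \
      -H "Authorization: PVEAPIToken=root@pam!veeam-test=<token-secret>" \
      https://10.30.78.36:8006/api2/json/nodes

    # Capture worker <-> PVE API traffic on the bridge the worker VM is attached to
    tcpdump -ni vmbr0 host 10.30.78.36 and tcp port 8006

If the curl call returns the node list in well under a second, the API itself is responsive and the delay is more likely on the network path or on the Veeam side.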


As a baseline check:
  • If you can access the PVE web GUI, that generally confirms the API is responsive.
  • Try running this on your PVE host:
    time pvesh get /nodes
    This should complete in less than a second. If it doesn’t, you may be dealing with performance or configuration issues on the host itself.
If that test passes, it’s very likely the problem lies somewhere in your network path—possibly an MTU mismatch, packet loss, or firewall interference.
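To rule out an MTU mismatch on that path, a quick do-not-fragment ping from a Linux host on the worker's network is often enough (1472 bytes of payload assumes a standard 1500-byte MTU; adjust to yours):

    # 1472 payload + 28 bytes of ICMP/IP headers = 1500; failures here point to an MTU problem
    ping -M do -s 1472 -c 4 10.30.78.36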


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Now we got the answer from Veeam:

Thank you for your updates.
I've read the reply from Proxmox, but I don't think they've understood the issue.
The problem is that the "status: Error" is the actual reply from the Proxmox cluster and this is what we need them to explain:
Why is the Proxmox cluster responding with "status: Error"?
It's not something we from Veeam can explain, but it's something Proxmox should explain, as that reply comes from the cluster.
Let me know if you have any questions or concerns.
Thanks in advance.


Are there any log files in the worker? Is it possible to log in and run tcpdump inside the worker?

Regards

Peter
 
Hi @SZ-PX ,

I understand what the Veeam team is trying to say. They clarified the meaning of the text produced by their application, which is appreciated.

In most cases the web application (the PVE API) does not just return the text "Error" or "status: Error". There is a numerical HTTP status code associated with it, for example 401 (unauthorized), 500 (internal server error), 503 (service unavailable), etc.
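If you want to see which numeric code the cluster actually returns, something along these lines (run from any host that can reach the node) prints just the status code and total time; without authentication you should still get a proper 401 rather than a bare "Error":

    # Print only the HTTP status code and request duration for the /nodes endpoint
    curl -k -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" https://10.30.78.36:8006/api2/json/nodes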

Let's assume for the time being that this is indeed a PVE error. Are you experiencing issues with any other aspect of PVE operations? Have you done any of the troubleshooting steps advised earlier? Have you confirmed API operations from a non-worker host?

Are there any log files in the worker? Is it possible to log in and run tcpdump inside the worker?
Worker is part of the Veeam product, you should address this to their support.


Finally, perform these steps:
- Open your browser
- Go to https://10.30.78.36:8006
- Authenticate
- Paste https://10.30.78.36:8006/api2/json/nodes into the address bar and press Enter

Do you get more than "{}"? If the reply is seemingly valid JSON, then either your Veeam installation is misconfigured or you have a network problem.
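For comparison, a healthy reply from that endpoint looks roughly like this (node names and extra fields will differ on your cluster):

    {"data":[{"node":"pve01","status":"online","type":"node", ...}]}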


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 