Hi,
I've got some weird behavior from my PBS instances during sync jobs that I don't understand.
I have two LANs with a WireGuard VPN in between.
In each LAN, I have a NAS running on OpenMediaVault (OMV) with PBS installed.
10.0.1.1/16 <----- WireGuard -----> 10.1.1.1/16
I have a sync job set up on 10.1.1.1 that fetches the backups from 10.0.1.1 and syncs them locally, every day at 5 am and 5 pm.
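For reference, this is how I check the job from the CLI on 10.1.1.1 (I set it up through the GUI, so the subcommand name below is from memory rather than copied from my shell history):

proxmox-backup-manager sync-job list

As far as I can tell, that shows the job together with its remote, the remote and local datastores, and the 5 am / 5 pm schedule.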
The scheduled, automatic sync job fails with the following error message:
Synchronization failed: remote connection to '10.0.1.1' failed - error trying to connect: error connecting to https://10.0.1.1:8007/ - tcp connect error: deadline has elapsed
However, when I manually run the same job with "Run now" from the web UI, there is no problem, even if I do it at the exact time the schedule should have run it.
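To compare the two cases, I look at the task history on 10.1.1.1 roughly like this (I believe these are the right subcommands; the UPID below is a placeholder for the ID of the failed scheduled run, taken from the list output):

proxmox-backup-manager task list
proxmox-backup-manager task log <UPID>   # <UPID>: placeholder for the failed scheduled run

The scheduled runs end with the error above, while the "Run now" runs of the same job finish without any problem.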
The network traffic from 10.1.0.0/16 to 10.0.0.0/16 is filtered; I only allow what I need.
10.1.1.1 can ping 10.0.1.1.
10.1.1.1 can "curl -s https://10.0.1.1:8007"
I have the same behavior whether I give "10.0.1.1" as the remote host or the FQDN, which also resolves to an IPv6 address. Same issue in both cases.
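In case the exact commands matter, the connectivity checks above look like this, plus a quick look at what the FQDN resolves to (pbs-remote.example.com is a placeholder for the real name, and -k / --connect-timeout are just options I add for testing):

ping -c 3 10.0.1.1
curl -sk --connect-timeout 10 -o /dev/null -w '%{http_code}\n' https://10.0.1.1:8007/
getent ahosts pbs-remote.example.com   # placeholder FQDN; shows both the IPv4 and IPv6 it resolves to

Both the ping and the curl succeed when I run them interactively from 10.1.1.1.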
Any idea?
Do I need to open something else between 10.1.1.1 and 10.0.1.1? Is there any other protocol or port required for the scheduled run that isn't used when clicking "Run now"?
Thanks in advance!