[SOLVED] Broken backup scheduler

Hi guys!
I've noticed that tasks created in Proxmox VE 7.4 are not executed on schedule. We moved the backup storage to a new IP address and re-mounted it on the server (the mount point did not change). Since then, the tasks are no longer executed on schedule, although launching them manually works without errors.
UPD: I've now started updating to PVE 7.4-13.
UPD: Some of the backup schedules work fine. I tried inspecting syslog, but there is no information about any errors or anything else. I also tried asking in the PBS thread, but got no answers.

Code:
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
ceph: 16.2.11-pve1
ceph-fuse: 16.2.11-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u2.1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.4
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

I checked syslog around the time the backup should have started, but there is no relevant information.
Code:
Jun 13 14:42:04 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:02 bf01pve04 pvedaemon[3084903]: <root@pam> successful auth for user 'root@pam'
Jun 13 14:45:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:04 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:05 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:05 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:05 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:08 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:10 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:45:13 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:46:52 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:46:52 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 13 14:46:52 bf01pve04 systemd[1]: Created slice User Slice of UID 0.
Jun 13 14:46:52 bf01pve04 systemd[1]: Starting User Runtime Directory /run/user/0...
Jun 13 14:46:52 bf01pve04 systemd[1]: Finished User Runtime Directory /run/user/0.
Jun 13 14:46:52 bf01pve04 systemd[1]: Starting User Manager for UID 0...
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Queued start job for default target Main User Target.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Created slice User Application Slice.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Reached target Paths.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Reached target Timers.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Starting D-Bus User Message Bus Socket.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Listening on GnuPG network certificate management daemon.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Listening on GnuPG cryptographic agent and passphrase cache.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Listening on D-Bus User Message Bus Socket.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Reached target Sockets.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Reached target Basic System.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Reached target Main User Target.
Jun 13 14:46:52 bf01pve04 systemd[3130027]: Startup finished in 129ms.
Jun 13 14:46:52 bf01pve04 systemd[1]: Started User Manager for UID 0.
Jun 13 14:46:52 bf01pve04 systemd[1]: Started Session 2721 of user root.
 
Last edited:
Hi,
please share the output of the following
Code:
systemctl status pvescheduler.service
journalctl -b -u pvescheduler.service
cat /etc/pve/vzdump.cron
cat /etc/pve/jobs.cfg
and tell us which of the jobs are problematic.
 
I've tried to reproduce this on

Code:
proxmox-ve: 8.0.0 (running kernel: 6.2.16-1-pve)
pve-manager: 8.0.0~8 (running version: 8.0.0~8/708b186d18a76e28)

Backup Server 2.4-1

but the scheduled task ran successfully again, as expected.

I've changed the IP on PBS via the web UI (Configuration > Network Interfaces) and on VE in /etc/pve/storage.cfg
 
Hi @fiona, thank you for the response.
Code:
# systemctl status pvescheduler.service
● pvescheduler.service - Proxmox VE scheduler
     Loaded: loaded (/lib/systemd/system/pvescheduler.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-03-27 18:28:37 +06; 2 months 22 days ago
    Process: 2084 ExecStart=/usr/bin/pvescheduler start (code=exited, status=0/SUCCESS)
   Main PID: 2085 (pvescheduler)
      Tasks: 3 (limit: 231991)
     Memory: 112.8M
        CPU: 9.772s
     CGroup: /system.slice/pvescheduler.service
             └─2085 pvescheduler

Mar 27 18:28:36 bf01pve04 systemd[1]: Starting Proxmox VE scheduler...
Mar 27 18:28:37 bf01pve04 pvescheduler[2085]: starting server
Mar 27 18:28:37 bf01pve04 systemd[1]: Started Proxmox VE scheduler.

Code:
# journalctl -b -u pvescheduler.service
-- Journal begins at Mon 2022-12-05 19:35:03 +06, ends at Mon 2023-06-19 09:25:06 +06. --
Mar 27 18:28:36 bf01pve04 systemd[1]: Starting Proxmox VE scheduler...
Mar 27 18:28:37 bf01pve04 pvescheduler[2085]: starting server
Mar 27 18:28:37 bf01pve04 systemd[1]: Started Proxmox VE scheduler.

Code:
# cat /etc/pve/vzdump.cron
# cluster wide vzdump cron schedule
# Automatically generated file - do not edit

PATH="/usr/sbin:/usr/bin:/sbin:/bin"

Code:
~# cat /etc/pve/jobs.cfg
vzdump: backup-5f23d4e5-5b70
        schedule sat 07:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 123,201

vzdump: backup-4093d60d-2555
        schedule 05:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 141,315

vzdump: backup-7c824c41-c56f
        schedule mon..fri 00:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 167,102

vzdump: backup-9ed1ae53-4e80
        schedule 01:30
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 159,160

vzdump: backup-84c4c1db-a841
        schedule sat 05:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 133,220

vzdump: backup-25a11158-71ef
        schedule mon 00:15
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 127,118

vzdump: backup-06faab24-c3b4
        schedule 21:30
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 191,192

vzdump: backup-7f09d26e-3dab
        schedule sat 03:30
        compress zstd
        enabled 1
        mailnotification always
        mailto pak.d@kazteleport.kz
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 178

vzdump: backup-90edda1f-cabc
        schedule mon..fri 23:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 144,145,200

vzdump: backup-6d944e85-9f36
        schedule sat 8:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 510,512,511

vzdump: backup-97601e29-b798
        schedule fri 23:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 115,153,501

vzdump: backup-1c562908-f401
        schedule sun 03:30
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 146

vzdump: backup-abe904f8-3ac2
        schedule sat 03:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 156

vzdump: backup-8e5e4a61-4336
        schedule mon..fri 05:30
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 134,135

vzdump: backup-b93fed57-a6d0
        schedule 22:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 179

vzdump: backup-e51ed79d-173a
        schedule 02:00
        compress zstd
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{guestname}}, {{vmid}}
        storage Proxmox_BS
        vmid 202

vzdump: backup-a82f9fc8-b5d6
        schedule 10:35
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{vmid}}, {{guestname}}
        storage Proxmox_BS
        vmid 101
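
With this many jobs it can help to see each job id with its schedule at a glance. A minimal awk sketch, shown here against a fragment of the jobs.cfg above (on a real node you would feed it /etc/pve/jobs.cfg instead of the here-doc):

```shell
# Print "job-id: schedule" for each vzdump job in jobs.cfg-style input.
awk '/^vzdump:/ {id = $2}
     /^[[:space:]]+schedule/ {$1 = ""; sub(/^ /, ""); print id ": " $0}' <<'EOF'
vzdump: backup-a82f9fc8-b5d6
        schedule 10:35
        enabled 1
vzdump: backup-5f23d4e5-5b70
        schedule sat 07:00
        enabled 1
EOF
```

For the sample input above this prints `backup-a82f9fc8-b5d6: 10:35` and `backup-5f23d4e5-5b70: sat 07:00`, which makes it easy to spot which schedules actually fired.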

As an example, we can start by checking VMID 101.

UPD: On some nodes in the cluster I've found some errors:
Code:
~# systemctl status pvescheduler.service
● pvescheduler.service - Proxmox VE scheduler
     Loaded: loaded (/lib/systemd/system/pvescheduler.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-06-15 14:37:11 +06; 3 days ago
    Process: 6081 ExecStart=/usr/bin/pvescheduler start (code=exited, status=0/SUCCESS)
   Main PID: 6082 (pvescheduler)
      Tasks: 1 (limit: 76761)
     Memory: 91.7M
        CPU: 3min 32.550s
     CGroup: /system.slice/pvescheduler.service
             └─6082 pvescheduler

Jun 15 14:37:10 220pve03 systemd[1]: Starting Proxmox VE scheduler...
Jun 15 14:37:11 220pve03 pvescheduler[6082]: starting server
Jun 15 14:37:11 220pve03 systemd[1]: Started Proxmox VE scheduler.
Jun 18 05:14:13 220pve03 pvescheduler[1766077]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
 
Last edited:
I've tried to reproduce this on

Code:
proxmox-ve: 8.0.0 (running kernel: 6.2.16-1-pve)
pve-manager: 8.0.0~8 (running version: 8.0.0~8/708b186d18a76e28)

Backup Server 2.4-1

but the scheduled task ran successfully again, as expected.

I've changed the IP on PBS via the web UI (Configuration > Network Interfaces) and on VE in /etc/pve/storage.cfg
You said the task runs successfully "again"; does that mean your backup jobs didn't start before?
 
Code:
vzdump: backup-a82f9fc8-b5d6
        schedule 10:35
        enabled 1
        mailnotification always
        mode snapshot
        notes-template {{vmid}}, {{guestname}}
        storage Proxmox_BS
        vmid 101

As an example, we can start by checking VMID 101.
Can you check the logs on the node with VM 101 and the output of ps aux | grep pvescheduler?

UPD: On some nodes in the cluster I've found some errors:
Code:
~# systemctl status pvescheduler.service
● pvescheduler.service - Proxmox VE scheduler
     Loaded: loaded (/lib/systemd/system/pvescheduler.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-06-15 14:37:11 +06; 3 days ago
    Process: 6081 ExecStart=/usr/bin/pvescheduler start (code=exited, status=0/SUCCESS)
   Main PID: 6082 (pvescheduler)
      Tasks: 1 (limit: 76761)
     Memory: 91.7M
        CPU: 3min 32.550s
     CGroup: /system.slice/pvescheduler.service
             └─6082 pvescheduler

Jun 15 14:37:10 220pve03 systemd[1]: Starting Proxmox VE scheduler...
Jun 15 14:37:11 220pve03 pvescheduler[6082]: starting server
Jun 15 14:37:11 ala220pve03 systemd[1]: Started Proxmox VE scheduler.
Jun 18 05:14:13 ala220pve03 pvescheduler[1766077]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
Hmm, the host name suddenly changed from 220pve03 to ala220pve03? Is this node affected by the issue? What is the output of ps aux | grep pvescheduler there? Anything interesting in the system logs?
 
Can you check the logs on the node with VM 101 and the output of ps aux | grep pvescheduler?
Code:
~# ps aux | grep pvescheduler
root        2085  0.0  0.0 335548 109240 ?       Ss   Mar27   0:08 pvescheduler
root        2087  0.0  0.0      0     0 ?        Z    Mar27   0:00 [pvescheduler] <defunct>
root        2260  0.0  0.0      0     0 ?        Z    Mar27   0:00 [pvescheduler] <defunct>
root     2992232  0.0  0.0   6244   712 pts/0    S+   15:40   0:00 grep pvescheduler

VM 101 was scheduled to back up at 10:35, but I found nothing in syslog.
Code:
Jun 19 10:34:17 bf01pve04 pveproxy[2649944]: worker exit
Jun 19 10:34:17 bf01pve04 pveproxy[2663543]: Clearing outdated entries from certificate cache
Jun 19 10:34:17 bf01pve04 pveproxy[2070]: worker 2649944 finished
Jun 19 10:34:17 bf01pve04 pveproxy[2070]: starting 1 worker(s)
Jun 19 10:34:17 bf01pve04 pveproxy[2070]: worker 2663697 started
Jun 19 10:35:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:02 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:04 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:04 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:05 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:05 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:08 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:10 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:13 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:35:30 bf01pve04 pveproxy[2630210]: worker exit
Jun 19 10:35:30 bf01pve04 pveproxy[2070]: worker 2630210 finished
Jun 19 10:35:30 bf01pve04 pveproxy[2070]: starting 1 worker(s)
Jun 19 10:35:30 bf01pve04 pveproxy[2070]: worker 2665193 started
Jun 19 10:35:41 bf01pve04 pveproxy[2663697]: Clearing outdated entries from certificate cache
Jun 19 10:36:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:03 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:04 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:04 bf01pve04 pmxcfs[1429]: [status] notice: received log
Jun 19 10:36:49 bf01pve04 pvedaemon[2470017]: <root@pam> successful auth for user 'root@pam'
Jun 19 10:36:50 bf01pve04 pvedaemon[2659278]: <root@pam> successful auth for user 'root@pam'
Jun 19 10:37:23 bf01pve04 pveproxy[2665193]: Clearing outdated entries from certificate cache
Jun 19 10:39:02 bf01pve04 pmxcfs[1429]: [status] notice: received log

Hmm, the host name suddenly changed from 220pve03 to ala220pve03?
No, that's my mistake (I wanted to change the hostname on the forum for security reasons). I've never changed a hostname in the cluster.
 
You said the task runs successfully "again"; does that mean your backup jobs didn't start before?
They started and were successful, so no issues arose. Note, though, that I used a different (beta) version for testing, sorry.
 
Code:
~# ps aux | grep pvescheduler
root        2085  0.0  0.0 335548 109240 ?       Ss   Mar27   0:08 pvescheduler
root        2087  0.0  0.0      0     0 ?        Z    Mar27   0:00 [pvescheduler] <defunct>
root        2260  0.0  0.0      0     0 ?        Z    Mar27   0:00 [pvescheduler] <defunct>
root     2992232  0.0  0.0   6244   712 pts/0    S+   15:40   0:00 grep pvescheduler
Well, there seem to be left-over zombie processes, which shouldn't happen. Can you try systemctl reload-or-restart pvescheduler.service and see if they go away?
 
What if you explicitly stop and start the service? Maybe the output of ps faxl contains another hint about the zombie processes? Otherwise, I'd suggest rebooting the host.
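
To filter that ps faxl output down to just the zombies, one option is to match on the process-state column. This is a sketch assuming the default `ps faxl` column layout (STAT is field 10, PID field 3, PPID field 4; the column positions are an assumption about your ps defaults):

```shell
# List zombie (state Z) processes with their PID and parent PID.
ps faxl | awk '$10 ~ /^Z/ {print "zombie PID " $3 " (parent " $4 ")"}'
```

On a healthy node this prints nothing; on the affected node it should list the two [pvescheduler] <defunct> entries from the output above.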
 
Otherwise, I'd suggest rebooting the host.
Finally, I decided that updating and rebooting the hosts would be the better option, and afterwards many VMs started backing up (hooray!)
But now I can see an interesting bug/problem. I'll try to explain: VMID 202 is scheduled to back up every day and the backups complete successfully, but I can see nothing on the PBS side.
Code:
Jun 20 02:00:07 ala220pve01 pvescheduler[3410472]: <root@pam> starting task UPID:ala220pve01:00340A29:025AD97E:6490B3C7:vzdump:202:root@pam:
Jun 20 02:00:07 ala220pve01 pvescheduler[3410473]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification failure --compress zstd
Jun 20 02:00:07 ala220pve01 pvescheduler[3410473]: INFO: Starting Backup of VM 202 (qemu)
Jun 20 09:30:09 ala220pve01 pvescheduler[3410473]: INFO: Finished Backup of VM 202 (07:30:02)
Jun 20 09:30:09 ala220pve01 pvescheduler[3410473]: INFO: Backup job finished successfully
Jun 21 02:00:00 ala220pve01 pvescheduler[4112092]: <root@pam> starting task UPID:ala220pve01:003EBEDD:02DEB18B:64920540:vzdump:202:root@pam:
Jun 21 02:00:00 ala220pve01 pvescheduler[4112093]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification failure --compress zstd
Jun 21 02:00:00 ala220pve01 pvescheduler[4112093]: INFO: Starting Backup of VM 202 (qemu)
Jun 21 11:13:42 ala220pve01 pvescheduler[4112093]: INFO: Finished Backup of VM 202 (09:13:42)
Jun 21 11:13:42 ala220pve01 pvescheduler[4112093]: INFO: Backup job finished successfully
On the PBS side I can see the backup of VMID 202 while the VM is being backed up, but after a "reload" on the datastore the backup disappears from the list.
 
Finally, I decided that updating and rebooting the hosts would be the better option, and afterwards many VMs started backing up (hooray!)
But now I can see an interesting bug/problem. I'll try to explain: VMID 202 is scheduled to back up every day and the backups complete successfully, but I can see nothing on the PBS side.
Glad to hear :)
Code:
Jun 20 02:00:07 ala220pve01 pvescheduler[3410472]: <root@pam> starting task UPID:ala220pve01:00340A29:025AD97E:6490B3C7:vzdump:202:root@pam:
Jun 20 02:00:07 ala220pve01 pvescheduler[3410473]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification failure --compress zstd
Jun 20 02:00:07 ala220pve01 pvescheduler[3410473]: INFO: Starting Backup of VM 202 (qemu)
Jun 20 09:30:09 ala220pve01 pvescheduler[3410473]: INFO: Finished Backup of VM 202 (07:30:02)
Jun 20 09:30:09 ala220pve01 pvescheduler[3410473]: INFO: Backup job finished successfully
Jun 21 02:00:00 ala220pve01 pvescheduler[4112092]: <root@pam> starting task UPID:ala220pve01:003EBEDD:02DEB18B:64920540:vzdump:202:root@pam:
Jun 21 02:00:00 ala220pve01 pvescheduler[4112093]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification failure --compress zstd
Jun 21 02:00:00 ala220pve01 pvescheduler[4112093]: INFO: Starting Backup of VM 202 (qemu)
Jun 21 11:13:42 ala220pve01 pvescheduler[4112093]: INFO: Finished Backup of VM 202 (09:13:42)
Jun 21 11:13:42 ala220pve01 pvescheduler[4112093]: INFO: Backup job finished successfully
On the PBS side I can see the backup of VMID 202 while the VM is being backed up, but after a "reload" on the datastore the backup disappears from the list.
Please post the full backup task logs from the PVE side and PBS side for this job.
 
Your backup is also not being done incrementally. When adding the storage on PVE via the GUI, are you sure you selected the last item in the list, 'Proxmox Backup Server'?
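
One quick way to verify is the type prefix of the entry in /etc/pve/storage.cfg: a real Proxmox Backup Server storage starts with "pbs:", while a "dir:"/"nfs:"/"cifs:" entry pointing at the same mount won't use the PBS API (and thus no dirty-bitmap incremental backups). A sketch against a hypothetical storage.cfg fragment (the server address and datastore name below are made up; on a node you would read /etc/pve/storage.cfg itself):

```shell
# Report the storage type of the entry named Proxmox_BS.
awk -v name=Proxmox_BS '$2 == name {print "type: " substr($1, 1, length($1) - 1)}' <<'EOF'
pbs: Proxmox_BS
        datastore backups
        server 192.0.2.10
dir: local_backup
        path /mnt/backups
EOF
```

Here it prints `type: pbs`; if it printed `type: dir`, the backups would go through the filesystem mount instead of the PBS API.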
 
Please post the full backup task logs from the PVE side and PBS side for this job.

Code:
-- Journal begins at Fri 2023-02-03 20:21:03 +06, ends at Thu 2023-06-22 09:27:04 +06. --
Jun 15 12:18:45 220pve01 systemd[1]: Starting Proxmox VE scheduler...
Jun 15 12:18:46 220pve01 pvescheduler[4842]: starting server
Jun 15 12:18:46 220pve01 systemd[1]: Started Proxmox VE scheduler.
Jun 15 13:02:09 220pve01 pvescheduler[32658]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
Jun 15 13:02:09 220pve01 pvescheduler[32657]: replication: cfs-lock 'file-replication_cfg' error: got lock request timeout
Jun 16 00:00:05 220pve01 pvescheduler[394404]: <root@pam> starting task UPID:220pve01:000604A5:0040721B:648B51A5:vzdump:102:root@pam:
Jun 16 00:00:05 220pve01 pvescheduler[394405]: INFO: starting new backup job: vzdump 167 102 --compress zstd --mailnotification always --storage Proxmox_BS --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1
Jun 16 00:00:05 220pve01 pvescheduler[394405]: INFO: Starting Backup of VM 102 (qemu)
Jun 16 00:02:23 220pve01 pvescheduler[394405]: INFO: Finished Backup of VM 102 (00:02:18)
Jun 16 00:02:23 220pve01 pvescheduler[394405]: INFO: Backup job finished successfully
Jun 16 02:00:06 220pve01 pvescheduler[439299]: <root@pam> starting task UPID:220pve01:0006B404:004B6F7A:648B6DC6:vzdump:202:root@pam:
Jun 16 02:00:07 220pve01 pvescheduler[439300]: INFO: starting new backup job: vzdump 202 --storage Proxmox_BS --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --compress zstd --mailnotification always
Jun 16 02:00:07 220pve01 pvescheduler[439300]: INFO: Starting Backup of VM 202 (qemu)
Jun 16 10:53:15 220pve01 pvescheduler[439300]: INFO: Finished Backup of VM 202 (08:53:08)
Jun 16 10:53:15 220pve01 pvescheduler[439300]: INFO: Backup job finished successfully
Jun 16 23:00:00 220pve01 pvescheduler[959896]: <root@pam> starting task UPID:220pve01:000EA599:00BECC0F:648C9510:vzdump:153:root@pam:
Jun 16 23:00:00 220pve01 pvescheduler[959897]: INFO: starting new backup job: vzdump 115 153 501 --storage Proxmox_BS --quiet 1 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot --compress zstd --mailnotification always
Jun 16 23:00:00 220pve01 pvescheduler[959897]: INFO: Starting Backup of VM 153 (qemu)
Jun 16 23:01:06 220pve01 pvescheduler[959897]: INFO: Finished Backup of VM 153 (00:01:06)
Jun 16 23:01:06 220pve01 pvescheduler[959897]: INFO: Backup job finished successfully
Jun 17 02:00:02 220pve01 pvescheduler[1040860]: <root@pam> starting task UPID:220pve01:000FE1DD:00CF4816:648CBF42:vzdump:202:root@pam:
Jun 17 02:00:02 220pve01 pvescheduler[1040861]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification always --compress zstd
Jun 17 02:00:02 220pve01 pvescheduler[1040861]: INFO: Starting Backup of VM 202 (qemu)
Jun 17 11:27:28 220pve01 pvescheduler[1040861]: INFO: Finished Backup of VM 202 (09:27:26)
Jun 17 11:27:28 220pve01 pvescheduler[1040861]: INFO: Backup job finished successfully
Jun 18 02:00:08 220pve01 pvescheduler[1782026]: <root@pam> starting task UPID:220pve01:001B310B:015324D2:648E10C8:vzdump:202:root@pam:
Jun 18 02:00:08 220pve01 pvescheduler[1782027]: INFO: starting new backup job: vzdump 202 --storage Proxmox_BS --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --compress zstd --mailnotification always
Jun 18 02:00:08 220pve01 pvescheduler[1782027]: INFO: Starting Backup of VM 202 (qemu)
Jun 18 09:13:03 220pve01 pvescheduler[1782027]: INFO: Finished Backup of VM 202 (07:12:55)
Jun 18 09:13:03 220pve01 pvescheduler[1782027]: INFO: Backup job finished successfully
Jun 19 00:00:00 220pve01 pvescheduler[2524464]: <root@pam> starting task UPID:220pve01:00268531:01CBFF12:648F4620:vzdump:102:root@pam:
Jun 19 00:00:00 220pve01 pvescheduler[2524465]: INFO: starting new backup job: vzdump 167 102 --storage Proxmox_BS --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --compress zstd --mailnotification always
Jun 19 00:00:00 220pve01 pvescheduler[2524465]: INFO: Starting Backup of VM 102 (qemu)
Jun 19 00:10:13 220pve01 pvescheduler[2524465]: INFO: Finished Backup of VM 102 (00:10:13)
Jun 19 00:10:13 220pve01 pvescheduler[2524465]: INFO: Backup job finished successfully
Jun 19 02:00:01 220pve01 pvescheduler[2588939]: <root@pam> starting task UPID:220pve01:0027810C:01D6FC71:648F6241:vzdump:202:root@pam:
Jun 19 02:00:01 220pve01 pvescheduler[2588940]: INFO: starting new backup job: vzdump 202 --compress zstd --mailnotification always --storage Proxmox_BS --quiet 1 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot
Jun 19 02:00:01 220pve01 pvescheduler[2588940]: INFO: Starting Backup of VM 202 (qemu)
Jun 19 09:13:30 220pve01 pvescheduler[2588940]: INFO: Finished Backup of VM 202 (07:13:29)
Jun 19 09:13:30 220pve01 pvescheduler[2588940]: INFO: Backup job finished successfully
Jun 20 00:05:05 220pve01 pvescheduler[3350956]: <root@pam> starting task UPID:220pve01:003321AE:02505152:649098D1:vzdump:102:root@pam:
Jun 20 00:05:05 220pve01 pvescheduler[3350958]: INFO: starting new backup job: vzdump 167 102 --storage Proxmox_BS --quiet 1 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot --compress zstd --mailnotification failure
Jun 20 00:05:05 220pve01 pvescheduler[3350958]: INFO: Starting Backup of VM 102 (qemu)
Jun 20 00:06:13 220pve01 pvescheduler[3350958]: INFO: Finished Backup of VM 102 (00:01:08)
Jun 20 00:06:13 220pve01 pvescheduler[3350958]: INFO: Backup job finished successfully
Jun 20 02:00:07 220pve01 pvescheduler[3410472]: <root@pam> starting task UPID:220pve01:00340A29:025AD97E:6490B3C7:vzdump:202:root@pam:
Jun 20 02:00:07 220pve01 pvescheduler[3410473]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification failure --compress zstd
Jun 20 02:00:07 220pve01 pvescheduler[3410473]: INFO: Starting Backup of VM 202 (qemu)
Jun 20 09:30:09 220pve01 pvescheduler[3410473]: INFO: Finished Backup of VM 202 (07:30:02)
Jun 20 09:30:09 220pve01 pvescheduler[3410473]: INFO: Backup job finished successfully
Jun 21 00:05:10 220pve01 pvescheduler[4064075]: <root@pam> starting task UPID:220pve01:003E034C:02D42D13:6491EA56:vzdump:102:root@pam:
Jun 21 00:05:10 220pve01 pvescheduler[4064076]: INFO: starting new backup job: vzdump 167 102 171 --compress zstd --mailnotification failure --storage Proxmox_BS --quiet 1 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot
Jun 21 00:05:10 220pve01 pvescheduler[4064076]: INFO: Starting Backup of VM 102 (qemu)
Jun 21 00:08:45 220pve01 pvescheduler[4064076]: INFO: Finished Backup of VM 102 (00:03:35)
Jun 21 00:08:45 220pve01 pvescheduler[4064076]: INFO: Backup job finished successfully
Jun 21 02:00:00 220pve01 pvescheduler[4112092]: <root@pam> starting task UPID:220pve01:003EBEDD:02DEB18B:64920540:vzdump:202:root@pam:
Jun 21 02:00:00 220pve01 pvescheduler[4112093]: INFO: starting new backup job: vzdump 202 --mode snapshot --notes-template '{{guestname}}, {{vmid}}' --quiet 1 --storage Proxmox_BS --mailnotification failure --compress zstd
Jun 21 02:00:00 220pve01 pvescheduler[4112093]: INFO: Starting Backup of VM 202 (qemu)
Jun 21 11:13:42 220pve01 pvescheduler[4112093]: INFO: Finished Backup of VM 202 (09:13:42)
Jun 21 11:13:42 220pve01 pvescheduler[4112093]: INFO: Backup job finished successfully
Jun 21 21:47:10 220pve01 pvescheduler[403879]: replication: cfs-lock 'file-replication_cfg' error: got lock request timeout
Jun 22 00:05:03 220pve01 pvescheduler[459745]: <root@pam> starting task UPID:220pve01:000703E2:035805E2:64933BCF:vzdump:102:root@pam:
Jun 22 00:05:03 220pve01 pvescheduler[459746]: INFO: starting new backup job: vzdump 167 102 171 --storage Proxmox_BS --quiet 1 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot --compress zstd --mailnotification failure
Jun 22 00:05:03 220pve01 pvescheduler[459746]: INFO: Starting Backup of VM 102 (qemu)
Jun 22 00:07:04 220pve01 pvescheduler[459746]: INFO: Finished Backup of VM 102 (00:02:01)
Jun 22 00:07:04 220pve01 pvescheduler[459746]: INFO: Backup job finished successfully
Jun 22 02:00:03 220pve01 pvescheduler[507422]: <root@pam> starting task UPID:220pve01:0007BE1F:03628D00:649356C3:vzdump:202:root@pam:
Jun 22 02:00:03 220pve01 pvescheduler[507423]: INFO: starting new backup job: vzdump 202 --storage Proxmox_BS --quiet 1 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot --compress zstd --mailnotification failure
Jun 22 02:00:03 220pve01 pvescheduler[507423]: INFO: Starting Backup of VM 202 (qemu)
Code:
Jun 22 09:25:29 pbs01 systemd[1]: session-221.scope: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit session-221.scope has successfully entered the 'dead' state.
Jun 22 09:25:29 pbs01 systemd-logind[526]: Removed session 221.
░░ Subject: Session 221 has been terminated
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 221 has been terminated.
Jun 22 09:25:30 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/315/2023-06-14T03:30:27Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:25:30 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/202/2023-06-21T20:00:03Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:25:42 pbs01 login[280992]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jun 22 09:25:42 pbs01 systemd-logind[526]: New session 222 of user root.
░░ Subject: A new session 222 has been created for user root
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 222 has been created for the user root.
░░
░░ The leading process of the session is 280992.
Jun 22 09:25:42 pbs01 systemd[1]: Started Session 222 of user root.
░░ Subject: A start job for unit session-222.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit session-222.scope has finished successfully.
░░
░░ The job identifier is 4905
Jun 22 09:25:42 pbs01 login[280997]: ROOT LOGIN  on '/dev/pts/0'
Jun 22 09:35:02 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/315/2023-06-14T03:30:27Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:35:02 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/202/2023-06-21T20:00:03Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Upload statistics for 'drive-sata2.img.fidx'
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: UUID: a8fe45691beb4c7fa2b3dc5ca256a21a
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Checksum: ac011bf703a0f9e275c8e106ff8be9c3d9ec986fc771319b2bc2271eca1f3b19
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Size: 1073741824000
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Chunk count: 256000
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Upload size: 1073448222720 (99%)
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Duplicates: 70+253119 (98%)
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Compression: 43%
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: successfully closed fixed index 3
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Upload statistics for 'drive-sata1.img.fidx'
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: UUID: e382aacdba76419a9e0e9e1fa72428d3
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Checksum: e1dc5a2ce5107e9fe4b3b7ceb5812f6966f17408da3187d46320002f29469a2d
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Size: 1771674009600
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Chunk count: 422400
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Upload size: 1758604558336 (99%)
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Duplicates: 3116+411281 (98%)
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Compression: 55%
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: successfully closed fixed index 2
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Upload statistics for 'drive-sata0.img.fidx'
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: UUID: 9a10ebb647da432c83009aa9ffe7b6b6
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Checksum: ab193b3cfe51b275df5bc7107a8bbb29dbea64d209fddc9edae5f54775c15e12
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Size: 268435456000
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Chunk count: 64000
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Upload size: 268099911680 (99%)
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Duplicates: 80+47396 (74%)
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: Compression: 19%
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: successfully closed fixed index 1
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: add blob "/var/backups/vm/202/2023-06-21T20:00:03Z/index.json.blob" (431 bytes, comp: 431)
Jun 22 09:35:48 pbs01 proxmox-backup-[697]: pbs01 proxmox-backup-proxy[697]: syncing filesystem
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: successfully finished backup
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: backup finished successfully
Jun 22 09:35:48 pbs01 proxmox-backup-proxy[697]: TASK OK
Jun 22 09:35:49 pbs01 proxmox-backup-proxy[697]: Upload backup log to datastore 'PVE_NFS', root namespace vm/202/2023-06-21T20:00:03Z/client.log.blob
Jun 22 09:37:47 pbs01 postfix/qmgr[803]: BBCBF160F41: from=<root@pbs01.kazteleport.kz>, size=3505, nrcpt=1 (queue active)
Jun 22 09:38:17 pbs01 postfix/smtp[281024]: connect to mail.kazteleport.kz[91.216.178.12]:25: Connection timed out
Jun 22 09:38:17 pbs01 postfix/smtp[281024]: BBCBF160F41: to=<sysadmin@kazteleport.kz>, relay=none, delay=376557, delays=376526/0.03/30/0, dsn=4.4.1, status=deferred (connect to mail.kazteleport.kz[91.216.178.12]:25: Connection timed out)
Jun 22 09:40:02 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/315/2023-06-14T03:30:27Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:45:02 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/315/2023-06-14T03:30:27Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:50:02 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/315/2023-06-14T03:30:27Z/index.json.blob"' - No such file or directory (os error 2)'
Jun 22 09:52:55 pbs01 proxmox-backup-[697]: pbs01 proxmox-backup-proxy[697]: write rrd data back to disk
Jun 22 09:52:55 pbs01 proxmox-backup-[697]: pbs01 proxmox-backup-proxy[697]: starting rrd data sync
Jun 22 09:52:55 pbs01 proxmox-backup-[697]: pbs01 proxmox-backup-proxy[697]: rrd journal successfully committed (20 files in 0.175 seconds)
Jun 22 09:55:03 pbs01 proxmox-backup-proxy[697]: error during snapshot file listing: 'unable to load blob '"/var/backups/vm/315/2023-06-14T03:30:27Z/index.json.blob"' - No such file or directory (os error 2)'
 
The task logs can be found in the UI: in the bottom panel, or by selecting your node and then Task History for Proxmox VE, and in the upper right corner for Proxmox Backup Server.
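If you prefer the CLI, the same task history can be listed there as well. A sketch (each command only exists on its respective host, hence the availability guards):

```shell
# Proxmox VE node: recent task history (same data as the UI's bottom panel)
command -v pvenode >/dev/null 2>&1 \
    && pvenode task list \
    || echo "pvenode not available on this host"

# Proxmox Backup Server: recent task history
command -v proxmox-backup-manager >/dev/null 2>&1 \
    && proxmox-backup-manager task list \
    || echo "proxmox-backup-manager not available on this host"
```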
 
Thank you @fiona for your patience and help!
There I found that, right after each backup task, there is a task named "Erase Data" which deletes the backup:
1687491597614.png
As you can see, the backup job for VMID 134 completed successfully.
The next task is:
1687491654473.png
And this happens with a lot of VMs.
From the PBS side everything looks OK (target datastore: PVE_NFS):
1687491709974.png
 
You need to find out who triggers that erase task. Do you have any kind of (cron/systemd-triggered) script that could be at fault? If it's triggered from the API from outside, /var/log/pveproxy/access.log on the Proxmox VE node could also give a hint.
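To illustrate the access.log approach: filtering for HTTP DELETE requests reveals the client IP and API user behind each removal. A minimal sketch, run against an invented sample file (the path, user name, and exact line format here are placeholders for illustration; check the real `/var/log/pveproxy/access.log` on your node):

```shell
# Hypothetical sample of a pveproxy access log (format approximated):
cat > /tmp/access.log.sample <<'EOF'
192.0.2.10 - apiuser@pve [22/06/2023:09:36:01 +0600] "DELETE /api2/json/nodes/220pve01/storage/Proxmox_BS/content/backup/vm%2F134 HTTP/1.1" 200 -
192.0.2.10 - apiuser@pve [22/06/2023:09:36:05 +0600] "GET /api2/json/cluster/resources HTTP/1.1" 200 1024
EOF

# Keep only DELETE calls; the source IP and user columns identify
# which client or integration is erasing the backups.
grep '"DELETE ' /tmp/access.log.sample
```

On a real node you would point the `grep` at `/var/log/pveproxy/access.log` (and its rotated copies) instead of the sample file.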
 
Thank you @fiona! We've found that the HostBill system erases backups if a client doesn't have the permission that allows backing up the VM, and it doesn't matter whether the backup was started manually or by the scheduler on the PVE side. All problems resolved!
 
