No previous backup found, cannot do incremental backup

We don't accept payment for specific issues. We did just announce the first stable version of PBS, and support subscriptions are now available via our website, but that is unrelated to specific issues you want fixed.

Since I can't reproduce it here, I'm still assuming something is wrong with your setup - either software or hardware. Have you tried on different hardware? Maybe try installing from our ISO installers as well, just for testing?
 
Hi @Stefan_R

Sorry for the long delay, I just hadn't had enough time to debug further until now. What I tried: installed Proxmox Backup Server from your release ISO on the same standalone Proxmox host whose VMs I want to back up.

Added a datastore (/backup/nodename) and connected the Proxmox server to it - still the same issue on the second backup: the first one goes through properly, the second one fails with the same error message:

Code:
INFO: starting new backup job: vzdump 101 --remove 0 --mode snapshot --storage pb102 --node pm104
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2020-11-28 13:47:29
INFO: status = running
INFO: VM Name: pbx101.xxx.xx
INFO: include disk 'scsi0' 'data:vm-101-disk-0' 160G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2020-11-28T12:47:29Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '7d9151f7-6af6-4de3-8ed0-e7015bfc38a8'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: created new
INFO:   0% (808.0 MiB of 160.0 GiB) in  3s, read: 269.3 MiB/s, write: 228.0 MiB/s
INFO:   1% (1.6 GiB of 160.0 GiB) in  6s, read: 277.3 MiB/s, write: 277.3 MiB/s
INFO:   2% (3.6 GiB of 160.0 GiB) in 14s, read: 257.5 MiB/s, write: 240.0 MiB/s
...
INFO:  88% (141.4 GiB of 160.0 GiB) in 53s, read: 4.4 GiB/s, write: 13.3 MiB/s
INFO:  96% (154.6 GiB of 160.0 GiB) in 56s, read: 4.4 GiB/s, write: 5.3 MiB/s
INFO: 100% (160.0 GiB of 160.0 GiB) in 57s, read: 5.4 GiB/s, write: 0 B/s
INFO: backup is sparse: 151.34 GiB (94%) total zero data
INFO: backup was done incrementally, reused 151.54 GiB (94%)
INFO: transferred 160.00 GiB in 57 seconds (2.8 GiB/s)
INFO: Finished Backup of VM 101 (00:00:57)
INFO: Backup finished at 2020-11-28 13:48:26
INFO: Backup job finished successfully
TASK OK

Code:
INFO: starting new backup job: vzdump 101 --node pm104 --storage pb102 --mode snapshot --remove 0
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2020-11-28 13:49:25
INFO: status = running
INFO: VM Name: pbx101.xxx.xx
INFO: include disk 'scsi0' 'data:vm-101-disk-0' 160G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2020-11-28T12:49:25Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 101 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
INFO: aborting backup job
ERROR: Backup of VM 101 failed - VM 101 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
INFO: Failed at 2020-11-28 13:49:26
INFO: Backup job finished with errors
TASK ERROR: job errors

Both machines, the Proxmox Backup Server and the standalone host, are up to date, freshly patched and rebooted:
Code:
root@pm104 ~ # pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Code:
root@pb102:~# proxmox-backup-manager versions
proxmox-backup-server 1.0.5-1 running version: 1.0.5

So probably the standalone Proxmox nodes are the issue - but there is nothing else I can change there: a Hetzner AX41 server, installimage with Debian 10, and Proxmox installed following the manual installation steps (as I already wrote above).

Do you maybe have any additional ideas?
 
When was the last time you shut down the VM? Maybe it still has an older version of the library loaded?
 
Hi @dcsapak ~3 minutes before starting the first backup :).

I ordered an additional server at Hetzner; I'll try a full Proxmox ISO installation to verify whether the issue only occurs with the Hetzner Debian 10 installimage - the target server can be ruled out now, since I installed the PBS from your release ISO.
 
Hi @Stefan_R

Just to update: I got the new root server yesterday and installed Proxmox this morning using the Hetzner-provided image - previously I had installed it on top of the Hetzner Buster image - and I don't know what to say: currently it seems to work (the target is a PBS installed on the same host).

I'll now do some further testing and reinstall the whole server using our documented way (Hetzner Debian Buster image, installing Proxmox manually). If I can reproduce the issue again, then something must be wrong with our docs or the Hetzner Debian image.
 
@Stefan_R I'm 100% able to reproduce the issue when I use Hetzner's Debian 10 Buster with installimage, then install Proxmox on top with the following commands:

Code:
Install proxmox:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt dist-upgrade -y
apt install proxmox-ve postfix open-iscsi -y
reboot

Create thin pool
lvcreate -L 100G -n data vg0
lvconvert --type thin-pool vg0/data
lvresize --poolmetadatasize +924M vg0/data
lvextend -l +100%FREE vg0/data

Followed by adding the storage to Proxmox, network config and so on. Do you maybe have a hint for me as to what could be wrong with that Hetzner image? Reinstalling all 4 standalone nodes is currently not an option, and I would love to use PBS :).
 
I have not checked all details of this thread, so take this as a silly question/hint only: do the source and the target have the same timezone configured?
I saw a 1-hour difference between timestamps.
And yesterday I had a weird problem with Ceph/OSD (a wholly different topic, I know) due to differing times (the reason was a too-slow sync between nodes and a wrong time in the BIOS of one node).
So this could be a culprit, I think.
 
Hi @wigor

Thanks for the input, just checked: both servers (Proxmox node and PBS) are in the same timezone and have the same time settings. So no luck, this isn't the source of my issue :(.
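For reference, a quick way to compare this on both hosts could look like the following (assuming systemd's timedatectl is available; the date line works everywhere):

```shell
# Run on both the PVE node and the PBS host and compare the output lines:
date +"%F %T %Z %z"   # local date/time, timezone name, and UTC offset
# On systemd hosts, timedatectl also shows the configured zone and NTP state;
# the "|| true" keeps the sketch going where it is unavailable:
timedatectl 2>/dev/null | grep -Ei 'time zone|ntp|synchronized' || true
```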
 
Hello,
may I dig this out? I have a similar issue, though not as consistently as described by the OP (since the PBS beta/PVE 6.2).
7-node PVE+Ceph cluster -> PBS 1.0-6; each node has 8-10 Windows VMs.
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

In my case it happens sporadically, once or twice a week (I run nightly backups, 7 days a week), with different VMs and on different nodes.

So for tonight, in the PVE backup log:
Code:
607: 2021-01-05 23:40:21 ERROR: VM 607 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
607: 2021-01-05 23:40:21 ERROR: Backup of VM 607 failed - VM 607 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup

in PBS task log for this VM:
Code:
2021-01-05T23:39:48+01:00: starting new backup on datastore 'ait': "vm/607/2021-01-05T22:39:43Z"
2021-01-05T23:39:48+01:00: download 'index.json.blob' from previous backup.
2021-01-05T23:39:48+01:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2021-01-05T23:39:48+01:00: download 'drive-scsi0.img.fidx' from previous backup.
2021-01-05T23:40:19+01:00: backup failed: connection error: error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../ssl/record/rec_layer_s3.c:1544:SSL alert number 20
2021-01-05T23:40:19+01:00: removing failed backup
2021-01-05T23:40:19+01:00: TASK ERROR: connection error: error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../ssl/record/rec_layer_s3.c:1544:SSL alert number 20

Could it happen because of multiple concurrent backups (7 nodes at a time)? Is the network connection or some service overloaded? (PBS is in another colocation, 1 Gbps fiber line.)

Thanks in advance
 
Hi everyone, and sorry upfront for digging out this old topic. However, we finally found the time to look at @ScIT's issue again, and I think we were able to pinpoint what's causing it.

As described in the opening post, for some reason incremental backups were not possible, i.e. the second snapshot/suspend backup to a PBS failed every time with an error message like

Code:
ERROR: VM 101 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup

Only new full backups (i.e. in stop mode) worked without problems.

We were now able to do some more tests with different systems, focusing on the partitioning/filesystems used during setup (LVM/LVM-thin on the client).

To make it short: it turns out that using reiserfs for /tmp (a separate partition on LVM) somehow interferes and causes the problem.
My best guess is that the index.json.blob that gets downloaded to check for status and previous backups cannot be saved properly or loses some attributes. At least some data that is presumably read from and written to /tmp gets corrupted when using that filesystem.

Why reiserfs, you may ask? For no particular reason: it has simply been listed in Hetzner's installimage template as an example filesystem for /tmp for quite a while, and has been part of the setup of the mentioned servers for a long time (from before the use of PBS anyway).

Obviously switching to a different filesystem for /tmp solved the initial problem.
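To see which filesystem actually backs /tmp on a given node, a quick check like this works (findmnt from util-linux assumed available; the "|| true" keeps the sketch going where it is not):

```shell
# Show which mount (device and filesystem type) contains /tmp
# (findmnt resolves the containing mount even if /tmp is not its own mount):
findmnt -no SOURCE,FSTYPE --target /tmp 2>/dev/null || true
# Portable fallback via df:
df -T /tmp | awk 'NR==2 {print $1, $2}'
```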

Not sure if you consider this a bug; with reiserfs being deprecated soon, it's probably not used much anymore - however, it seems worth mentioning that it can cause problems here.

Let me know if more information is needed... best regards.
 
If anyone comes across this issue, you can add this to /etc/fstab on the Proxmox node to get around it, switching /tmp to tmpfs. I'm not running reiserfs, but clearly something with the /tmp folder was the problem for me.

tmpfs /tmp tmpfs rw,mode=1777,size=2g

Run mount -a as well to mount it.
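Spelled out as a sketch (it dry-runs against a scratch copy of fstab so it is safe to test; on a real node you would point `fstab` at /etc/fstab and then run mount -a):

```shell
# Workaround from the post above: put /tmp on tmpfs via an fstab entry.
# Set fstab=/etc/fstab on the real node; the scratch copy is for a dry-run.
fstab=/tmp/fstab.test
cp /etc/fstab "$fstab" 2>/dev/null || : > "$fstab"

line='tmpfs /tmp tmpfs rw,mode=1777,size=2g'
# Append the entry only if an identical line is not already present.
grep -qxF "$line" "$fstab" || echo "$line" >> "$fstab"

tail -n1 "$fstab"
# On the real node, activate and verify with:
#   mount -a
#   findmnt -no FSTYPE /tmp   # should print "tmpfs"
```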
 
Thx @zanderson-aim, your solution works perfectly for me too.

/tmp was on reiserfs on my server before your tips.
 
This seems to be recurring, also within our 8-node cluster.
I tried @zanderson-aim's method, but to no avail.
The only "workaround" that helps is a halt of the VM or a reboot;
a single backup right after that works 100% of the time,
but after that, it's a dice roll.

We also cannot narrow down which VMs (configs, OS, cache modes, etc.) are more susceptible to this.

The QEMU guest agent pauses the VM, then flushes/syncs the guest's file I/O, right? And then freezes it, right?
 
I have the same problem. Some VMs error out and I cannot create backups. I think the problem has existed since the upgrade to PBS 3.2 and PVE 8.2.

I have around 150 VMs, and 10 of them have problems.

Any ideas?

Thank you

Kind regards
Simon
 
Hi,
I have the same problem. Some VMs have error and I cannot create Backups. I think the problem exists since Upgrade to PBS 3.2 and Pve 8.2
Please share the backup task log for a problematic backup from both Proxmox VE and PBS, as well as the configuration of an affected VM via qm config <ID>, replacing <ID> with the actual ID of the VM.
 
Hi Fiona,

thank you for your reply.
Maybe some more information:

- As mentioned by other users, I've tried stopping the affected VMs; then the backup works for one or two days, and now the problem is back again
- I have some VMs where the backup works one day and not the next, and so on

Now here the requested information:

qm config:
Code:
root@pve:~# qm config 168
agent: 1
balloon: 6144
bios: ovmf
boot: order=scsi0;ide2
cores: 2
cpu: EPYC
efidisk0: PVE_SATA_01:168/vm-168-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
hotplug: disk,network,usb,memory,cpu
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=7.2.0,ctime=1686807027
name: <vm-name>
net0: virtio=<macaddr>,bridge=vmbr909,firewall=1
numa: 1
onboot: 1
ostype: l26
rng0: source=/dev/urandom
scsi0: PVE_SSD_01:168/vm-168-disk-0.qcow2,iothread=1,size=120G
scsihw: virtio-scsi-single
smbios1: uuid=14778477-b6db-4372-93f8-5ea9a623ebc4
sockets: 1
vmgenid: db76ac57-4026-42f5-8a66-0c4b50f24204

PVE-Backup-Log:
Code:
168: 2024-06-26 01:14:36 INFO: Starting Backup of VM 168 (qemu)
168: 2024-06-26 01:14:36 INFO: status = running
168: 2024-06-26 01:14:37 INFO: VM Name: <vm-name>
168: 2024-06-26 01:14:37 INFO: include disk 'scsi0' 'PVE_SSD_01:168/vm-168-disk-0.qcow2' 120G
168: 2024-06-26 01:14:37 INFO: include disk 'efidisk0' 'PVE_SATA_01:168/vm-168-disk-0.qcow2' 528K
168: 2024-06-26 01:14:37 INFO: backup mode: snapshot
168: 2024-06-26 01:14:37 INFO: ionice priority: 7
168: 2024-06-26 01:14:37 INFO: creating Proxmox Backup Server archive 'vm/168/2024-06-25T23:14:36Z'
168: 2024-06-26 01:14:37 INFO: issuing guest-agent 'fs-freeze' command
168: 2024-06-26 01:14:37 INFO: issuing guest-agent 'fs-thaw' command
168: 2024-06-26 01:14:37 ERROR: VM 168 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
168: 2024-06-26 01:14:37 INFO: aborting backup job
168: 2024-06-26 01:14:37 INFO: resuming VM again
168: 2024-06-26 01:14:37 ERROR: Backup of VM 168 failed - VM 168 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup


PBS-Tasks-Log:
Code:
D4/UPID:pbs:00011F53:011064D4:00000336:667B4F54:backup:pbs\x2dbackup\x3avm-168:pve-user@pbs::2024-06-26T01:14:28+02:00: starting new backup on datastore 'pbs-backup' from ::ffff:172.xx.xx.xx: "vm/168/2024-06-25T23:14:36Z"
D4/UPID:pbs:00011F53:011064D4:00000336:667B4F54:backup:pbs\x2dbackup\x3avm-168:pve-user@pbs::2024-06-26T01:14:28+02:00: GET /previous: 400 Bad Request: Unable to open fixed index "/pbs-backup/vm/168/2024-06-24T21:02:37Z/drive-efidisk0.img.fidx" - got unknown magic number

PBS-syslog:
Code:
2024-06-26T01:14:28.513039+02:00 pbs proxmox-backup-proxy[73555]: starting new backup on datastore 'pbs-backup' from ::ffff:172.xx.xx.xx: "vm/168/2024-06-25T23:14:36Z"
2024-06-26T01:14:28.555295+02:00 pbs proxmox-backup-proxy[73555]: GET /previous: 400 Bad Request: Unable to open fixed index "/pbs-backup/vm/168/2024-06-24T21:02:37Z/drive-efidisk0.img.fidx" - got unknown magic number
2024-06-26T01:14:28.556041+02:00 pbs proxmox-backup-proxy[73555]: backup ended and finish failed: backup ended but finished flag is not set.
2024-06-26T01:14:28.556091+02:00 pbs proxmox-backup-proxy[73555]: removing unfinished backup
2024-06-26T01:14:28.556213+02:00 pbs proxmox-backup-proxy[73555]: removing backup snapshot "/pbs-backup/vm/168/2024-06-25T23:14:36Z"
2024-06-26T01:14:28.558647+02:00 pbs proxmox-backup-proxy[73555]: TASK ERROR: backup ended but finished flag is not set.


Thank you very much for your help & regards
Simon
 
The root cause seems to be Unable to open fixed index "/pbs-backup/vm/168/2024-06-24T21:02:37Z/drive-efidisk0.img.fidx" - got unknown magic number. Please see here for a thread with a similar (same?) issue: https://forum.proxmox.com/threads/backup-suceeds-but-ends-up-failing-verification.149249/post-676533

Is it always an EFI disk with this error?

On what kind of physical disks/file systems is your datastore? Are you using any special mount options? Please check the health of the physical drives, e.g. smartctl -a /dev/XYZ
 
Hi Fiona,

did you mean the backup store on PBS?
--> It's mounted via CephFS (similar to the other post), which worked fine BEFORE the update.

regards
Simon
 
Hi,
what is the health status of your Ceph cluster? Can you see the same kind of data corruption in the fixed index (.fidx) file as reported in the other thread?
Please post the fixed index header by running head -c8 <path-to-fixed-index-for-image>.img.fidx | hexdump. This should match the fixed index magic number 7f2f ed41 fd91 cd0f, which it seems is not the case for that index.
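A small sketch to scan every index file under a directory for that header (the /tmp demo file and path below are just placeholders so the loop has something to check; on an actual PBS you would point it at your datastore, e.g. /pbs-backup):

```shell
# Write the expected 8-byte fixed-index magic (7f2f ed41 fd91 cd0f, quoted in
# the post above) into a demo file so the loop below has something to verify.
printf '\177\057\355\101\375\221\315\017' > /tmp/demo-drive.img.fidx

expected="7f2fed41fd91cd0f"
# Placeholder path: point the glob at the real datastore on an actual PBS.
for f in /tmp/*.fidx; do
    got=$(head -c8 "$f" | od -An -tx1 | tr -d ' \n')
    if [ "$got" = "$expected" ]; then
        echo "ok:      $f"
    else
        echo "corrupt: $f (magic: $got)"
    fi
done
```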

Further, please provide the full output of pveversion -v and proxmox-backup-manager versions --verbose.

Edit: Request additional debug output
 
Hi Chris,

the Ceph cluster is healthy.
For the other 150 VMs, the backup works fine.

Here is the requested output:

Code:
root@pve:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-13
proxmox-kernel-6.8: 6.8.8-1
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
pve-kernel-5.15.152-1-pve: 5.15.152-1
pve-kernel-5.15.149-1-pve: 5.15.149-1
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.15.131-1-pve: 5.15.131-2
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
amd64-microcode: 3.20230808.1.1~deb12u1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.4-1
proxmox-backup-file-restore: 3.2.4-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Code:
root@pbs:~# proxmox-backup-manager versions --verbose
proxmox-backup                    3.2.0        running kernel: 6.8.4-3-pve
proxmox-backup-server             3.2.6-1      running version: 3.2.4     
proxmox-kernel-helper             8.1.0                                   
proxmox-kernel-6.8                6.8.8-2                                 
proxmox-kernel-6.8.8-1-pve-signed 6.8.8-1                                 
proxmox-kernel-6.8.4-3-pve-signed 6.8.4-3                                 
proxmox-kernel-6.8.4-2-pve-signed 6.8.4-2                                 
ifupdown2                         3.2.0-1+pmx8                           
libjs-extjs                       7.0.0-4                                 
proxmox-backup-docs               3.2.6-1                                 
proxmox-backup-client             3.2.6-1                                 
proxmox-mail-forward              0.2.3                                   
proxmox-mini-journalreader        1.4.0                                   
proxmox-offline-mirror-helper     0.6.6                                   
proxmox-widget-toolkit            4.2.3                                   
pve-xtermjs                       5.3.0-3                                 
smartmontools                     7.3-pve1                               
zfsutils-linux                    2.2.4-pve1

Thank you & regards
Simon
 
