No previous backup found, cannot do incremental backup

We don't accept payment for specific issues. We did just announce the first stable version of PBS, and support subscriptions are now available via our website, but that is unrelated to specific issues you want fixed.

Since I can't reproduce it here, I'm still assuming something is wrong with your setup - either software or hardware. Have you tried on different hardware? Maybe try installing from our ISO installers as well, just for testing?
 
Hi @Stefan_R

Sorry for the long delay, I just didn't have enough time to debug further until now. What I tried: I installed Proxmox Backup Server from your release ISO on the same standalone Proxmox host whose VMs I want to back up.

Added a datastore (/backup/nodename) and connected the Proxmox server to it - still the same issue on the second backup. The first one goes through properly, the second one fails with the same error message:

Code:
INFO: starting new backup job: vzdump 101 --remove 0 --mode snapshot --storage pb102 --node pm104
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2020-11-28 13:47:29
INFO: status = running
INFO: VM Name: pbx101.xxx.xx
INFO: include disk 'scsi0' 'data:vm-101-disk-0' 160G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2020-11-28T12:47:29Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '7d9151f7-6af6-4de3-8ed0-e7015bfc38a8'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: created new
INFO:   0% (808.0 MiB of 160.0 GiB) in  3s, read: 269.3 MiB/s, write: 228.0 MiB/s
INFO:   1% (1.6 GiB of 160.0 GiB) in  6s, read: 277.3 MiB/s, write: 277.3 MiB/s
INFO:   2% (3.6 GiB of 160.0 GiB) in 14s, read: 257.5 MiB/s, write: 240.0 MiB/s
...
INFO:  88% (141.4 GiB of 160.0 GiB) in 53s, read: 4.4 GiB/s, write: 13.3 MiB/s
INFO:  96% (154.6 GiB of 160.0 GiB) in 56s, read: 4.4 GiB/s, write: 5.3 MiB/s
INFO: 100% (160.0 GiB of 160.0 GiB) in 57s, read: 5.4 GiB/s, write: 0 B/s
INFO: backup is sparse: 151.34 GiB (94%) total zero data
INFO: backup was done incrementally, reused 151.54 GiB (94%)
INFO: transferred 160.00 GiB in 57 seconds (2.8 GiB/s)
INFO: Finished Backup of VM 101 (00:00:57)
INFO: Backup finished at 2020-11-28 13:48:26
INFO: Backup job finished successfully
TASK OK

Code:
INFO: starting new backup job: vzdump 101 --node pm104 --storage pb102 --mode snapshot --remove 0
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2020-11-28 13:49:25
INFO: status = running
INFO: VM Name: pbx101.xxx.xx
INFO: include disk 'scsi0' 'data:vm-101-disk-0' 160G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2020-11-28T12:49:25Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 101 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
INFO: aborting backup job
ERROR: Backup of VM 101 failed - VM 101 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
INFO: Failed at 2020-11-28 13:49:26
INFO: Backup job finished with errors
TASK ERROR: job errors

Both machines, the Proxmox Backup Server and the standalone host, are up to date, freshly patched and rebooted:
Code:
root@pm104 ~ # pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Code:
root@pb102:~# proxmox-backup-manager versions
proxmox-backup-server 1.0.5-1 running version: 1.0.5

So probably the standalone Proxmox nodes are the issue - but I can't do anything differently there: a Hetzner AX41 server, installimage with Debian 10, and Proxmox installed following the manual installation steps (as I already wrote above).

Do you maybe have any additional ideas?
 
When was the last time you shut down the VM? Maybe it still has an older version of the library loaded?
 
Hi @dcsapak ~3 minutes before starting the first backup :).

I ordered an additional server at Hetzner; I'll try a full ISO Proxmox installation to verify whether the issue only occurs with the Debian 10 Hetzner installimage. The target server should be ruled out by now, since I installed the PBS from your release ISO.
 
Hi @Stefan_R

Just to update: I got the new root server yesterday and installed Proxmox this morning using the Hetzner-provided image - previously I had installed it on top of the Hetzner Buster image - and I don't know what to say: currently it seems to work (the target is a PBS installed on the same host).

I'll now do some further testing and reinstall the whole server our documented way (Hetzner Debian Buster image, install Proxmox manually). If I can reproduce the issue again, then something must be wrong with our docs or the Hetzner Debian image.
 
@Stefan_R I'm 100% able to reproduce the issue when I use Hetzner's Debian 10 Buster with installimage and then install Proxmox on top with the following commands:

Code:
Install proxmox:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt dist-upgrade -y
apt install proxmox-ve postfix open-iscsi -y
reboot

Create thin pool
lvcreate -L 100G -n data vg0
lvconvert --type thin-pool vg0/data
lvresize --poolmetadatasize +924M vg0/data
lvextend -l +100%FREE vg0/data

Followed by adding the storage to Proxmox, network config and so on. Do you maybe have a hint for me as to what could be wrong with that Hetzner image? Reinstalling all 4 standalone nodes is currently not an option, and I would love to use PBS :).
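For completeness: the "adding the storage" step can also be done from the CLI instead of the GUI, roughly like this (the storage ID `data` is just a placeholder matching the thin pool name above):

```shell
# Register the LVM-thin pool created above as a Proxmox VE storage.
# Here "data" is both the thin pool name (vg0/data) and the storage ID.
pvesm add lvmthin data --vgname vg0 --thinpool data --content images,rootdir
```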
 
I have not checked all details of this thread, so take this as a silly question/hint only: do the source and the target have the same timezone configured?
I saw a 1-hour difference between the timestamps.
And yesterday I had a weird problem with Ceph/OSD (wholly different topic, I know) due to differing times (the reason was a too-slow sync between nodes and a wrong time in the BIOS of the node).
So this could be a culprit, I think.
 
Hi @wigor

Thanks for the input, just checked: both servers (Proxmox node and PBS) are in the same timezone and have the same time settings. So no luck - this isn't the source of my issue :(.
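For anyone wanting to rule out clock drift the same way, a quick check is to print the time in UTC on both machines and compare (the PBS hostname below is a placeholder):

```shell
# Print the current time in UTC on the local node; run the same command
# on the PBS host and compare - the outputs should match to within a
# second or two regardless of the configured timezone.
date -u +"%Y-%m-%d %H:%M:%S %Z"
# ssh root@pbs.example.com 'date -u "+%Y-%m-%d %H:%M:%S %Z"'
```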
 
Hello,
may I dig this out: I have a similar issue, though not as consistent as described by the OP (since the PBS beta/PVE 6.2).
7-node PVE+Ceph cluster -> PBS 1.0-6; each node has 8-10 Windows VMs.
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

In my case it happens sporadically once or twice a week (I run nightly backups, 7 days a week), with different VMs and on different nodes.

So for tonight, in the PVE backup log:
Code:
607: 2021-01-05 23:40:21 ERROR: VM 607 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup
607: 2021-01-05 23:40:21 ERROR: Backup of VM 607 failed - VM 607 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup

In the PBS task log for this VM:
Code:
2021-01-05T23:39:48+01:00: starting new backup on datastore 'ait': "vm/607/2021-01-05T22:39:43Z"
2021-01-05T23:39:48+01:00: download 'index.json.blob' from previous backup.
2021-01-05T23:39:48+01:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2021-01-05T23:39:48+01:00: download 'drive-scsi0.img.fidx' from previous backup.
2021-01-05T23:40:19+01:00: backup failed: connection error: error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../ssl/record/rec_layer_s3.c:1544:SSL alert number 20
2021-01-05T23:40:19+01:00: removing failed backup
2021-01-05T23:40:19+01:00: TASK ERROR: connection error: error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../ssl/record/rec_layer_s3.c:1544:SSL alert number 20

Could it happen because of multiple concurrent backups (7 nodes at a time)? Is the network connection or some service overloaded? (PBS is in another colocation, on a 1 Gbps fiber line.)

Thanks in advance
 
Hi everyone, and sorry upfront for digging out this old topic. However, we finally found the time to look at @ScIT's issue again, and I think we were able to pinpoint what's causing it.

As described in the opening post, for some reason incremental backups were not possible, i.e. the second snapshot/suspend backup to a PBS failed every time with an error message like

Code:
ERROR: VM 101 qmp command 'backup' failed - backup register image failed: command error: no previous backup found, cannot do incremental backup

Only new full backups (i.e. stop mode) worked without problems.

We were able to do some more tests with different systems now, focusing on the partitioning/filesystems used during setup (LVM/LVM-thin on the client).

To make it short: it turns out that using reiserfs for /tmp (a separate partition on LVM) somehow interferes and causes the problem.
My best guess is that the json.blob that gets downloaded to check for status and previous backups cannot be saved properly, or is missing some attributes. At any rate, some data read from and written to /tmp probably gets corrupted when using that filesystem.
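For anyone wanting to check whether their node is affected, a quick way to see which filesystem backs /tmp (just a generic check, nothing PBS-specific):

```shell
# Print the filesystem type backing /tmp; on the affected Hetzner
# installimage setups described here this would print "reiserfs".
df -T /tmp | awk 'NR==2 {print $2}'
```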

Why reiserfs, you may ask? For no particular reason; it has simply been listed in Hetzner's installimage template as the example filesystem for /tmp for quite a while, and it has been part of the setup of the mentioned servers for a long time (and before the use of PBS anyway).

Obviously, switching to a different filesystem for /tmp solved the initial problem.

Not sure if you consider this a bug; with reiserfs being deprecated soon, it is maybe not used a lot anymore. However, it seems worth mentioning that it can cause problems here.

Let me know if any more information is needed... best regards.
 
If anyone comes across this issue, you can add this to /etc/fstab on the Proxmox node to get around it, switching /tmp to tmpfs. I'm not running reiserfs, but clearly something with the /tmp folder was the problem for me.

tmpfs /tmp tmpfs rw,mode=1777,size=2g

Run mount -a as well to mount it.
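After the mount -a, a quick check that /tmp really is on tmpfs now (findmnt is part of util-linux and available on PVE hosts):

```shell
# Show the filesystem type of the mount that contains /tmp;
# once the fstab entry above is active, this should report tmpfs.
findmnt -T /tmp -no FSTYPE
```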
 
Thanks @zanderson-aim - your solution works perfectly for me too.

/tmp was on reiserfs on my server before your tip.
 