Backup taking a long time at "add blob" step.

Monki77

Recently, one of my PVE hosts (PVE01) has started taking hours to back up, hanging on this step, but only for one of the VMs:
2024-12-10T02:31:03+00:00: add blob "/mnt/pve01-backup/vm/100/2024-12-10T02:30:02Z/index.json.blob" (435 bytes, comp: 435)
It's currently 13:05 and it's still doing it (10 hours 30 minutes).

It used to take seconds, but for over a week now it's been taking hours. My other host (PVE02) does not have the problem.

Both PVE01 and PVE02 physical hosts have PBS installed alongside PVE... Not ideal, I know, but I have no choice.
The datastore for each PVE host is on a 12TB mirrored-HDD NAS (an ASUSTOR AS1102T), mounted via fstab using the same entry on each host, pointing at its own share, e.g. xxx.xxx.xxx.xxx:/volume1/Backups/pve01 /mnt/pve01-backup nfs defaults 0 0
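For reference, this is roughly what that entry would look like with the NFS options spelled out instead of relying on "defaults" (the version and rsize/wsize values below are just an illustration, not what is actually set on my mounts):

# /etc/fstab on PVE01 (illustrative only; the live entry just uses "defaults")
xxx.xxx.xxx.xxx:/volume1/Backups/pve01  /mnt/pve01-backup  nfs  rw,hard,noatime,vers=4.1,rsize=131072,wsize=131072  0  0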

The VM it hangs on is a Win Server 2022 VM: local-zfs is a 960GB SSD mirror, STOR-PVE01-8TB is a 2x 8TB HDD mirror.
VM INFO:
agent: 1,fstrim_cloned_disks=1
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 8
cpu: host
cpuunits: 200
description: <b>Server 2022</b>
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hotplug: disk,network,usb
ide2: local:iso/virtio-win-0.1.262.iso,media=cdrom,size=708140K
localtime: 1
machine: pc-q35-8.1,viommu=virtio
memory: 6144
meta: creation-qemu=8.1.5,ctime=1728059349
name: XXXX
net0: virtio=BC:24:11:A7:30:56,bridge=vmbr0,firewall=1
numa: 1
onboot: 1
ostype: win11
scsi0: local-zfs:vm-100-disk-1,aio=native,discard=on,iothread=1,size=60G,ssd=1
scsi1: Stor-PVE01-8TB:vm-100-disk-1,aio=native,discard=on,iothread=1,size=7300G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=42db9df2-da75-464e-a4ba-9dca277b8c52
sockets: 1
startup: order=4,up=10
tablet: 1
tags: server2022
vmgenid: ffee5e25-73e0-4000-801a-6f810b4d3211

PBS 3.3.2
PVE 8.3.1



As mentioned, all other VMs, and the PVE02 host, are absolutely fine... it's just this one VM when backing up the Stor-PVE01-8TB disk.


I cannot find anything pointing to this "add blob" step and why it would be taking so long.
Any help would be appreciated!

Thanks
 
I can probably increase my ARC a couple of GB and look into the metadata caching then.
Do you know why it would only be happening to this one VM? I can't put my finger on what changed, other than updating PBS.
Happy to try anything else if you need a test subject!
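For the ARC change, this is the sort of thing I had in mind — just a sketch, and the 10 GiB cap is a placeholder rather than a properly sized value:

# /etc/modprobe.d/zfs.conf (cap the ARC; value in bytes, 10 GiB here as a placeholder)
options zfs zfs_arc_max=10737418240
# then rebuild the initramfs so it applies on the next boot:
#   update-initramfs -u -k all
# or set it on the fly without rebooting:
#   echo 10737418240 > /sys/module/zfs/parameters/zfs_arc_max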
 
I can probably increase my ARC a couple of GB and look into the metadata caching then.
Your datastore is located on NFS, however? The metadata caching adaptation to reduce the impact of the stat calls is required on the PBS datastore side, not on the source (PVE) side (a quick way to gauge that cost is sketched at the end of this reply).

Do you know why it would only be happening to this one VM?
Simply because of the size of the attached disk. Or do you have other similarly sized VMs which are not affected by this?
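To get a rough feel for how expensive those metadata lookups are on your setup, you can time a batch of stat calls directly against the chunk store on the NFS mount (the sample size and path here are only an example):

cd /mnt/pve01-backup/.chunks
# stat 10000 chunk files and see how long the metadata round trips take
time find . -type f | head -n 10000 | xargs stat --format=%n > /dev/null

Multiply the per-call latency that comes out of this by the number of chunks referenced by the 7300G disk and you get an idea of where the hours go.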
 
Ahh, yes, sorry, I overlooked the datastore part.
Yes, it's on a 12TB NAS mounted via NFS... so never mind!

My other host has a VM with a 4TB pool for data which, upon checking the task log, does actually take a long time compared to a few weeks ago... so it would seem I've overlooked that :(

PVE02:
INFO: 100% (2.2 GiB of 2.2 GiB) in 31s, read: 50.7 MiB/s, write: 49.3 MiB/s
INFO: backup was done incrementally, reused 3.54 TiB (99%)
INFO: transferred 2.16 GiB in 19042 seconds (118.7 KiB/s)

PBS02:
2024-12-13T04:53:10+00:00: add blob "/mnt/pve02-backup/vm/105/2024-12-13T04:52:32Z/index.json.blob" (382 bytes, comp: 382)
2024-12-13T10:09:55+00:00: successfully finished backup
2024-12-13T10:09:55+00:00: backup finished successfully
2024-12-13T10:09:55+00:00: TASK OK


PVE01:
INFO: 100% (14.7 GiB of 14.7 GiB) in 4m 8s, read: 30.0 MiB/s, write: 17.0 MiB/s
INFO: backup is sparse: 996.00 MiB (6%) total zero data
INFO: backup was done incrementally, reused 7.17 TiB (99%)
INFO: transferred 14.66 GiB in 39157 seconds (392.6 KiB/s)

PBS01:
2024-12-13T02:34:21+00:00: add blob "/mnt/pve01-backup/vm/100/2024-12-13T02:30:03Z/index.json.blob" (436 bytes, comp: 436)
2024-12-13T13:22:49+00:00: successfully finished backup
2024-12-13T13:22:49+00:00: backup finished successfully
2024-12-13T13:22:49+00:00: TASK OK


I think what I will do for now is just stagger the backups more so they don't both saturate the NAS at the same time. I guess maybe it'll help a bit; a rough idea of what I mean is below.
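Something like this is what I mean by staggering — a sketch of the schedule lines from /etc/pve/jobs.cfg on each host (job names, storage IDs and times are made up for illustration; the same can be set in the GUI under Datacenter -> Backup):

# PVE01 (excerpt, illustrative only)
vzdump: backup-pve01-nightly
	schedule 02:30
	storage pbs-pve01
	all 1

# PVE02 (excerpt, illustrative only)
vzdump: backup-pve02-nightly
	schedule 21:00
	storage pbs-pve02
	all 1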


Sorry to be a pain... are you able to briefly explain the basics of what is going on at this step that results in such a slow speed?
INFO: transferred 14.66 GiB in 39157 seconds (392.6 KiB/s)
 
Sorry to be a pain... are you able to briefly explain the basics of what is going on at this step that results in such a slow speed?
INFO: transferred 14.66 GiB in 39157 seconds (392.6 KiB/s)
If you are referring to the quoted log line, this is just an average over the full backup task. Since the check for known chunk existence, performed at the end, takes up a lot of time and increases the total runtime, the average calculated from the total runtime will be low even though the actual data transfer itself was fast.
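As a quick sanity check of that number, nothing PBS-specific, just the arithmetic:

# 14.66 GiB moved over the entire 39157 s task duration:
awk 'BEGIN { printf "%.1f KiB/s\n", 14.66 * 1024 * 1024 / 39157 }'
# -> 392.6 KiB/s, so the low figure reflects the long total runtime,
#    not the data transfer itself, which finished in roughly 4 minutes.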
 
