Super slow speed when backing up NAS

nwongrat

Member
Feb 16, 2023
[Screenshot: backup job status showing read/write speed]


I am using PBS for backups. It works as it should when backing up my other NAS, VMs and CTs. However, when it comes to this NAS, which includes every passthrough disk, the backup job never finishes. Sometimes it times out, sometimes I have to cancel it.

I have 2 NAS running TrueNAS SCALE. On the first one I back up only the operating system; I exclude all the disks because all that data is transferred to the cloud every day. The 2nd NAS, however, contains the VMs (on an NFS share) that are shared with all the users. At the moment, none of those shared VMs are being used by any client.

What could be the most likely reason for the super slow speed and the timeouts?

Thanks
 
My guess is that because you are using passthrough hardware, a new dirty-bitmap has to be created every single time the backup runs - this means all sectors of the disk need to be read, even if they do not contain data, as far as I understand.

I think it would be better for you to back up the NAS using the backup client directly on the NAS and back up the files - not the drives.

i.e. install the PBS client package on the NAS and use that to back up the files.

That should allow you to back up your data - and not suffer from the slow speed when creating the dirty bitmaps.

i.e. something similar to the below.

Code:
# file-level backup of /yourfolder to the 'backup' datastore on the PBS server
BACKUP_ID=$(hostname)
/usr/local/sbin/proxmox-backup-client backup root.pxar:/yourfolder --backup-id "$BACKUP_ID" --repository 'backup@pbs!backup@<pbs-server-hostname>:backup'

But take a look at https://pbs.proxmox.com/docs/backup-client.html#creating-backups
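For illustration only, a minimal sketch of how that could look on the NAS, assuming the datastore is called backup, the token is backup@pbs!backup, and /mnt/yourpool/yourshare stands in for whatever dataset you want to protect (all placeholders taken from the example above, not your actual setup):

Code:
# placeholders - adjust repository, token secret and path to your own setup
export PBS_REPOSITORY='backup@pbs!backup@<pbs-server-hostname>:backup'
export PBS_PASSWORD='<api-token-secret>'

# file-level backup of the share contents instead of the raw disks
proxmox-backup-client backup share.pxar:/mnt/yourpool/yourshare --backup-id "$(hostname)"

# check what ended up on the server
proxmox-backup-client list

The environment variables are just the documented alternative to passing --repository and typing the password interactively.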

I doubt you will _EVER_ get the backup server to work great with backing up passthrough disks inside PVE.

It is not built to back up real disks, but QEMU disks, where the disk manager can tell which blocks have changed - physical disks do not have this feature, so the client has to read all sectors of the disk.
 
Hi,
My guess is that because you are using passthrough hardware, a new dirty-bitmap has to be created every single time the backup runs - this means all sectors of the disk need to be read, even if they do not contain data, as far as I understand.
No, dirty bitmaps are also created and handled for passed-through drives.

I am using PBS for backups. It works as it should when backing up my other NAS, VMs and CTs. However, when it comes to this NAS, which includes every passthrough disk, the backup job never finishes. Sometimes it times out, sometimes I have to cancel it.
The reason the dirty bitmaps need to be recreated is that the backup failed last time.

What could be the most likely reason for the super slow speed and the timeouts?
What do CPU, network and IO load on the server look like during the backup? If you have high IO wait, have a look here. The read speed doesn't look terrible, to be honest, and with PBS only new chunks need to be copied, so the write speed alone doesn't actually tell you that it's slow. Since you already attempted a backup before, the initial chunks will already be on PBS. Can you share the full backup log from a run where a timeout occurred?
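If you want a quick way to capture that while the backup is running, the standard tools on the PVE host are enough (iostat comes from the sysstat package; the 5-second interval is just an example):

Code:
# CPU usage and IO wait ("wa" column) at a glance
top

# per-disk utilization and latency, sampled every 5 seconds (needs the sysstat package)
iostat -x 5

# overall run queue, memory and IO wait over time
vmstat 5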
 
I attempted to reproduce the issue and there does indeed seem to be an issue with backups of passed-through disks. But for me, the VM itself also hangs completely after a while (I ran into the issue with both QEMU 7.1 and 7.2). Does that happen for you too? Please post the output of pveversion -v and qm config <ID> for the VM.
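For reference, both are run on the PVE host; the VMID 100 below is just a placeholder for your NAS VM:

Code:
# package and kernel versions of the host
pveversion -v

# configuration of the affected VM (replace 100 with its actual VMID)
qm config 100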

EDIT: If you are using kernel 6.1, can you reboot with kernel 5.15 instead and see if the issue is present there as well? For me, it seems to work with 5.15.
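As a rough sketch of how to get back onto 5.15, assuming a 5.15 kernel is still installed and your proxmox-boot-tool version already supports kernel pinning (otherwise just pick the 5.15 entry from the "Advanced options" boot menu):

Code:
# list the kernels that are installed and selectable
proxmox-boot-tool kernel list

# pin one of the listed 5.15 kernels (version below is only an example), then reboot
proxmox-boot-tool kernel pin 5.15.85-1-pve
reboot

# later, to return to the default (newest) kernel
proxmox-boot-tool kernel unpin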
 
No, dirty bitmaps are also created and handled for passed-through drives.
Okay, that's cool, I was not aware that it would work - but I mean the code would still have to read all sectors, right?

It's not like you can ask a physical disk "what sectors changed since XXX"; for a QEMU disk you can, as far as I know, which is why it's fast to process a dirty bitmap on a virtual disk.
 
Okay, that's cool, I was not aware that it would work - but I mean the code would still have to read all sectors, right?
No. QEMU assumes exclusive access to the passed-through disk (you get all other kinds of problems if you'd write to the disk at the same time as QEMU), so it knows about all writes and can track with a bitmap which clusters/ranges are unchanged and which are dirty. It doesn't matter if the underlying disk is physical or virtual for the bitmap/tracking implementation. This is abstracted away by the block layer in QEMU.
 
No. QEMU assumes exclusive access to the passed-through disk (you get all other kinds of problems if you'd write to the disk at the same time as QEMU), so it knows about all writes and can track with a bitmap which clusters/ranges are unchanged and which are dirty. It doesn't matter if the underlying disk is physical or virtual for the bitmap/tracking implementation. This is abstracted away by the block layer in QEMU.
Okay, that's cool :) And I learnt something today.
 
FYI, assuming it was the same issue, a fix has been added and will be in the next 6.1 kernel for PVE, i.e. the one after 6.1.14-1-pve.
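If you want to check whether you already have a newer kernel than that (and pull it in once it lands in the repositories), the usual update path applies:

Code:
# currently booted kernel
uname -r

# fetch and install updates, including newer kernel packages, then reboot to use them
apt update
apt dist-upgrade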
 