Very large volume (10TB disk)

sander93

Renowned Member
Sep 30, 2014
Hello,

We are really happy with PBS! Backups are much faster and we can keep more retention thanks to the incremental/deduplication functionality.

We have a dedicated PBS server with 12 x 8TB disks and 2 x enterprise SSDs as the special device.
We have currently set special_small_blocks to 4K; is this the right value? (Only 110GB is used on the special device, with 28.4TB of backup data in use.)

The only problem I have is with very large volumes: we have 2 VMs with a 10TB volume each.
Backing up such a VM is very time consuming (more than 24 hours).

Is there any way to speed this up?
I am also afraid that after a cold boot (no dirty bitmap) the whole volume needs to be backed up completely again.

The VMs are Windows VMs.

Kind Regards,

Sander
 
We have currently set special_small_blocks to 4K; is this the right value? (Only 110GB is used on the special device, with 28.4TB of backup data in use.)
Looks OK. Most of the speedup comes from the metadata being handled by the special device, since in a datastore
the files are not that small (we aim for a chunk size of ~4MiB).
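As a rough way to check this on a concrete datastore, the sketch below walks the .chunks directory and shows how much data would fall under a few candidate special_small_blocks cutoffs. It is not an official PBS tool, and the datastore path is an assumption; adjust it to your setup.

```python
# Minimal sketch: size distribution of PBS chunk files, to judge which
# special_small_blocks cutoff would actually redirect data to the special
# device. The datastore path below is an assumption -- adjust it.
import os

DATASTORE = "/mnt/datastore/backup"         # hypothetical mount point
CUTOFFS = [4 * 1024, 16 * 1024, 64 * 1024]  # candidate special_small_blocks values

sizes = []
for root, _dirs, files in os.walk(os.path.join(DATASTORE, ".chunks")):
    for name in files:
        sizes.append(os.path.getsize(os.path.join(root, name)))

total = sum(sizes)
print(f"{len(sizes)} chunks, {total / 2**40:.2f} TiB total, "
      f"average {total / max(len(sizes), 1) / 2**20:.2f} MiB")

for cutoff in CUTOFFS:
    small = [s for s in sizes if s <= cutoff]
    print(f"<= {cutoff // 1024:3d}K: {len(small)} files, "
          f"{sum(small) / 2**30:.2f} GiB would stay below that cutoff")
```

With ~4MiB chunks, hardly any chunk data should fall under a 4K cutoff, so what ends up on the special device is essentially metadata.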

The only problem I have is with very large volumes: we have 2 VMs with a 10TB volume each.
Backing up such a VM is very time consuming (more than 24 hours).

Is there any way to speed this up?
Do you back up over gigabit? My napkin math says that 10TiB over 24 hours is ~120MiB/s, so right at the gigabit limit.
Aside from throwing more hardware at it (faster source storage/faster network/etc.) there are not really any options to speed that up,
besides using the dirty-bitmap feature.
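Spelling out the napkin math (plain arithmetic, nothing PBS-specific):

```python
# 10 TiB read in 24 hours, compared with the ceiling of a 1 Gbit/s link.
volume = 10 * 2**40                # 10 TiB in bytes
seconds = 24 * 3600                # 24 hours
needed = volume / seconds / 2**20  # MiB/s required
gigabit = 1e9 / 8 / 2**20          # 1 Gbit/s expressed in MiB/s
print(f"needed: {needed:.0f} MiB/s, gigabit ceiling: {gigabit:.0f} MiB/s")
# -> needed: ~121 MiB/s vs a ~119 MiB/s gigabit ceiling: right at the limit
```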

I am also afraid that after a cold boot (no dirty bitmap) the whole volume needs to be backed up completely again.
Yes, this is normal, since we cannot (at the moment) persist the dirty-bitmap on VM stop.
 
Sorry for the late reply.

We back up over a 10G network (actually 2 x 10G LACP).

The backup source is a Ceph cluster; can this be the problem?
 
Interested in this thread, as we are currently working with PBS to find the best-fit special_small_blocks size.

As a question: what block size did you set on your ZFS datastore? Is it the default 128K block size?

If it is the default, you may find that the datastore pool performs better with a 1M block size for writes/reads.

@dcsapak please correct me if I'm mistaken: if the datastore block size is set to 1M, this should speed up write/read performance, as the data being written to the datastore is larger than 128K.

This should make a difference on spinning rust drives; it is less of an issue for SSDs.

""Cheers
G
 
@dcsapak please correct me if I'm mistaken: if the datastore block size is set to 1M, this should speed up write/read performance, as the data being written to the datastore is larger than 128K.
I am not sure what you mean. Are you talking about 'special_small_blocks' or the recordsize? The first only influences where the blocks are saved ("normal" vdevs or the special device), and the second only sets the maximum record size of a file before it is split into multiple records.
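To make the distinction concrete, here is a small hedged sketch that just reads both properties from the datastore dataset with the standard zfs tooling; the dataset name is an assumption.

```python
# Sketch: read the two ZFS properties being discussed. special_small_blocks
# decides which blocks land on the special vdev, recordsize caps the record
# size of a file. The dataset name "backup/datastore" is an assumption.
import subprocess

DATASET = "backup/datastore"  # hypothetical pool/dataset

out = subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value",
     "recordsize,special_small_blocks", DATASET],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    prop, value = line.split("\t")
    print(f"{prop}: {value}")
```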
 
  • Like
Reactions: velocity08
I am not sure what you mean. Are you talking about 'special_small_blocks' or the recordsize? The first only influences where the blocks are saved ("normal" vdevs or the special device), and the second only sets the maximum record size of a file before it is split into multiple records.
The record size for the ZFS storage target (not the special device).

""Cheers
G
 
OK, yes. If you set that higher, for larger files this should result in fewer read operations per file. Which value is good depends on the exact data and the underlying disks.
Note that if you change it, it only takes effect for newly written files.
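If you want to experiment with that, a minimal sketch of bumping the recordsize on the datastore dataset could look like the following; the dataset name is again an assumption, and existing chunks keep their old record size until they are rewritten.

```python
# Sketch: raise recordsize to 1M on the datastore dataset. recordsize > 128K
# needs the large_blocks pool feature (enabled by default on recent pools),
# and only files written after the change use the new record size.
import subprocess

DATASET = "backup/datastore"  # hypothetical pool/dataset

subprocess.run(["zfs", "set", "recordsize=1M", DATASET], check=True)
subprocess.run(["zfs", "get", "recordsize", DATASET], check=True)
```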
 
The backup source is a Ceph cluster; can this be the problem?

... anything could be a problem! I would try to copy with dd to /dev/null from the Proxmox host where your BIG Windows VM is running. That way you can see the read speed of the source VM. If the speed is OK, then you must go on and check other things!
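For anyone who prefers a self-contained test over raw dd, here is a rough sketch that measures sequential read throughput from the host; the source path is an assumption (point it at e.g. an rbd-mapped device or a large file on the Ceph storage).

```python
# Sketch: sequential read speed of a device or file, in the spirit of
# `dd if=... of=/dev/null`. The SOURCE path is an assumption. For a regular
# file, drop the page cache first or the result will be inflated.
import time

SOURCE = "/dev/rbd0"          # hypothetical source device/file
BLOCK = 4 * 1024 * 1024       # 4 MiB reads, roughly a PBS chunk
LIMIT = 10 * 2**30            # stop after 10 GiB, enough for a rough number

read_bytes = 0
start = time.monotonic()
with open(SOURCE, "rb", buffering=0) as src:
    while read_bytes < LIMIT:
        buf = src.read(BLOCK)
        if not buf:
            break
        read_bytes += len(buf)
elapsed = time.monotonic() - start
print(f"{read_bytes / 2**20 / elapsed:.0f} MiB/s over {read_bytes / 2**30:.1f} GiB")
```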

Good luck / Bafta!
 
OK, yes. If you set that higher, for larger files this should result in fewer read operations per file. Which value is good depends on the exact data and the underlying disks.
Note that if you change it, it only takes effect for newly written files.
So, since we are talking about PBS, wouldn't all backups and their respective files be larger than 128K?
If 4K and smaller are heading over to the special device, wouldn't the normal RAIDZ2 best be set to a 512K or 1M record size?

What other files are being stored on the RAIDZ2 that are going to be smaller than 512K or 1M, if we are primarily backing up VMs, as they consist of the VM image?

Maybe I'm missing part of the picture.

The only other thought that comes to mind: when backing up PVE itself, would these be stored as files in a folder structure?

Then maybe define a 512K or 1M record size datastore for VM backups, and a 128K datastore for file backups like PVE itself?

Thoughts and guidance would be greatly appreciated.

""Cheers
G
 
So, since we are talking about PBS, wouldn't all backups and their respective files be larger than 128K?
Only the chunks; the metadata files (e.g. manifest, didx, fidx) are probably not much bigger than a few hundred KB.

Whether increasing the value also increases performance remains to be shown. Maybe you can try setting it higher and do a few benchmarks? (You would of course have to rewrite the whole datastore so that the chunks are newly written.)
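To see what the index and blob files actually weigh on a concrete datastore, a sketch like the one below groups everything outside .chunks by file suffix; the datastore path is an assumption.

```python
# Sketch: average size per file type of the PBS metadata files (.fidx, .didx,
# .blob, ...) compared to the chunks, which live under .chunks and were
# measured separately above. The datastore path is an assumption.
import os
from collections import defaultdict

DATASTORE = "/mnt/datastore/backup"       # hypothetical mount point

by_suffix = defaultdict(lambda: [0, 0])   # suffix -> [file count, total bytes]
for root, dirs, files in os.walk(DATASTORE):
    dirs[:] = [d for d in dirs if d != ".chunks"]   # skip the chunk store
    for name in files:
        suffix = os.path.splitext(name)[1] or "(no suffix)"
        size = os.path.getsize(os.path.join(root, name))
        by_suffix[suffix][0] += 1
        by_suffix[suffix][1] += size

for suffix, (count, total) in sorted(by_suffix.items()):
    print(f"{suffix:>12}: {count:6d} files, average {total / count / 1024:.0f} KiB")
```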
 
