Hi, I have a PBS server that handles the backups from PVE.
Each VM backup is about 48 TB.
I want to run a verify job after each new backup is created, but it takes about 3 days to complete.
That is really too long. Is there any way to make it faster?
The average speed during the backup is about 200 MB/s, and it is stable. That seems too slow for my setup, maybe?
Resource consumption is about 3.11% of the 64 CPU(s), with IO delay at 1-2%.
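If my math is right, the numbers are at least consistent: at ~200 MB/s, reading 48 TB takes about 48,000,000 MB / 200 MB/s = 240,000 s, roughly 2.8 days, so the ~3 day verify time matches that kind of throughput.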
The datastore is on ZFS: 3x raidz2 (each vdev has six 6 TB spinning disks) plus a special mirror vdev (2x 3.2 TB NVMe SSDs), with 'special_small_blocks' set to 128K (matching the default 128K recordsize). Compression is enabled on the pool with zstd at the default level. Below is zpool iostat -v output, and after it a rough sketch of how the pool is set up.
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
BKP 53.4T 47.7T 5.56K 97 190M 6.53M
raidz2-0 18.6T 14.1T 1.94K 3 66.5M 50.0K
scsi-35000c50083e2c623 - - 332 0 11.1M 8.33K
scsi-35000c50083e2ccd3 - - 332 0 11.1M 8.33K
scsi-35000c50083e2ee07 - - 331 0 11.1M 8.33K
scsi-35000c50083e2f0e3 - - 330 0 11.0M 8.33K
scsi-35000c50083e2f2cf - - 330 0 11.0M 8.33K
scsi-35000c50083e315fb - - 331 0 11.1M 8.33K
raidz2-1 18.7T 14.0T 1.95K 3 66.9M 46.8K
scsi-35000c500843e110b - - 333 0 11.2M 7.80K
scsi-35000c50084413443 - - 333 0 11.2M 7.80K
scsi-35000c5008444e6a3 - - 332 0 11.1M 7.80K
scsi-35000c500845e6857 - - 331 0 11.1M 7.80K
scsi-35000c500845e95c3 - - 332 0 11.1M 7.80K
scsi-35000c500845ea863 - - 332 0 11.2M 7.80K
raidz2-2 15.5T 17.2T 1.61K 3 55.9M 29.9K
scsi-35000c5008472803f - - 274 0 9.30M 4.98K
scsi-35000c50084a653bb - - 273 0 9.31M 4.98K
scsi-35000c50084a6fea3 - - 274 0 9.33M 4.98K
scsi-35000c50084aa7ac3 - - 274 0 9.33M 4.98K
scsi-35000c50084b378c3 - - 274 0 9.32M 4.98K
scsi-35000c50084b3834f - - 274 0 9.31M 4.98K
special - - - - - -
mirror-3 586G 2.33T 65 86 539K 6.41M
nvme-MEMBLAZE_P6536CH0320M00_SH222503665 - - 32 43 270K 3.21M
nvme-MEMBLAZE_P6536CH0320M00_SH222902758 - - 32 43 269K 3.21M
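As mentioned above, here is a rough sketch of how the pool is set up (commands reconstructed from memory; the device names are placeholders, not the exact ones I used):

zpool create BKP \
    raidz2 <disk1> <disk2> <disk3> <disk4> <disk5> <disk6> \
    raidz2 <disk7> <disk8> <disk9> <disk10> <disk11> <disk12> \
    raidz2 <disk13> <disk14> <disk15> <disk16> <disk17> <disk18> \
    special mirror <nvme1> <nvme2>
zfs set compression=zstd BKP            # default zstd level
zfs set special_small_blocks=128K BKP   # blocks up to 128K go to the special vdev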
The CPU is an AMD EPYC 7D12 (64 cores), with 128 GB RAM.
Kernel version: Linux 6.8.12-13-pve (2025-07-22T10:00Z)
I noticed that even though I set special_small_blocks=128K, there are still a lot of 32K block reads coming from the spinning disks. Is that a problem? (Below is zpool iostat -rv 1 output for one of the disks, and after it the commands I would use to double-check the settings.)
scsi-35000c50084a6fea3 sync_read sync_write async_read async_write scrub trim rebuild
req_size ind agg ind agg ind agg ind agg ind agg ind agg ind agg
-------------------------------------------- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
512 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
16K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
32K 183 0 0 0 90 0 0 0 0 0 0 0 0 0
64K 0 0 0 0 0 1 0 0 0 0 0 0 0 0
128K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
256K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
512K 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1M 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2M 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4M 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8M 0 0 0 0 0 0 0 0 0 0 0 0 0 0
16M 0 0 0 0 0 0 0 0 0 0 0 0 0 0
----------------------------------------------------------------------------------------------------------------------------------------------
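For reference, these are the commands I would use to double-check the settings (the dataset name is from my setup, and I am not certain the zdb invocation is the best way to do this, so treat it as a guess):

zfs get recordsize,special_small_blocks,compression BKP
zfs get -r special_small_blocks BKP   # confirm the PBS datastore dataset actually inherits the 128K value
zdb -bb BKP                           # block statistics with a size breakdown; slow on a pool this size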