Method to Copy PBS Backups Elsewhere?

theprez1980

I'm in the process of redoing my Proxmox cluster setup and PBS server. I don't want to lose the VMs backed up in PBS, as I'll need all of that to restore to the new instances of PVE.

Is it possible to connect a USB drive, use rclone, or some other method to save what's in the PBS datastore and be able to place it back on a fresh install of PBS?

Said differently, I want to back up what's in PBS so that I can reinstall PBS along with the previously backed-up VMs and then restore them to PVE.

Thanks
 
When you look at how PBS works, you will understand that there are no individual backup "files" you can directly get out of it. And even if you could, the sheer volume would probably kill you, because deduplication is what keeps the datastore small.

You can, however, transfer a whole datastore as a data disk, or the subdirectory / ZFS dataset where the PBS datastore resides. What is in there is essentially the metadata for each backup plus the chunks that make up the data. You can restore that data to another PBS instance and recreate the datastore in place.

You will have to use the same username that you did the backups with, or change the owners of the backup groups.
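
To make that concrete, here is a rough sketch of what an offline copy could look like, assuming the datastore lives at /mnt/datastore/store1 and the USB disk is mounted at /mnt/usb (the paths, datastore name and VMID are placeholders, adapt them to your setup):

Code:
# stop PBS so nothing writes to the datastore during the copy
systemctl stop proxmox-backup proxmox-backup-proxy

# copy the backup metadata and the .chunks/ directory in one go,
# preserving ownership and timestamps
cp -a /mnt/datastore/store1/. /mnt/usb/store1-copy/

# on the fresh PBS: put the data in place, make sure it is owned by
# backup:backup, then add it back as a datastore pointing at that path
# (via the GUI or datastore.cfg, as discussed below)
chown -R backup:backup /mnt/datastore/store1

# the logical PBS owner of each backup group sits in a small "owner" file
cat /mnt/datastore/store1/vm/100/owner

systemctl start proxmox-backup proxmox-backup-proxy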
 
Thanks for the detailed reply, that makes sense.

So no matter how I copy the ZFS directory, whether through rclone or some other method, as long as I reuse the same user ID I should be good to go then?
 
Adding to what meyergru has said, you could also attach a drive to your existing PBS server, add it as a new datastore, and do a local sync from one datastore to the new one. It accomplishes the same thing, but you'd be working in the GUI. When you set up the new PBS server, you can add that datastore again. One of the benefits of doing the sync is that you can choose how deep you want to sync (i.e. everything vs. only the last x).

This post shows how that is done by editing /etc/proxmox-backup/datastore.cfg:
https://forum.proxmox.com/threads/datastore-recovery.72835/#post-325491


Note: be mindful of tabs/spaces in the datastore.cfg file. It's picky, and you need to make sure you're using tabs and spaces in the right places or it will complain that it can't parse the file. This applies whether you are copying the files over manually or using PBS to sync them.
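
For reference, a stanza in that file looks roughly like this; the datastore name and path below are made up, and the property lines are the indented ones, which is exactly where the tab/space pickiness bites:

Code:
datastore: usb-copy
	path /mnt/usb/usb-copy
	comment offline copy for the migration
	gc-schedule daily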
 
I think so - I only did it by moving a whole disk containing a datastore, because my PBS is a VM.

By "user", I mean the PBS user. All Linux files seem to be owned by "backup:backup". The logical PBS user is kept in a file named "owner" in the backup group directory, I imagine you could also change its content.

And yes, you could as well mount your USB disk, create a new datastore and then sync it. That would also work via network if the new PBS instance can reach the old instance. In that case, you would not even need a transfer medium.
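
If both instances can reach each other, the sync can also be set up from the CLI of the new PBS, along these lines; the host, names and credentials are placeholders, so double-check the exact options with "proxmox-backup-manager remote create --help" and "sync-job create --help":

Code:
# point the new PBS at the old one
proxmox-backup-manager remote create old-pbs \
    --host 192.0.2.10 --auth-id sync@pbs --password 'xxx' \
    --fingerprint '<certificate fingerprint of the old PBS>'

# pull everything from the old datastore into the new one via a sync job
proxmox-backup-manager sync-job create pull-old \
    --store newstore --remote old-pbs --remote-store oldstore

# or run a one-off pull without defining a job
proxmox-backup-manager pull old-pbs oldstore newstore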
 
Hi,

The easiest & most integrated method would be to use sync jobs - although that requires a secondary PBS instance.

Even more interesting to you would probably be removable datastores, which are currently in the works.
These would allow you to add e.g. a removable disk connected via USB as a datastore and create a sync job to it.
 
Looking forward to seeing how you will implement that.
But I have to say, I'm already happy with my own setup for our RDX drives. :)
 
I would really like to see some of the work-in-progress mentioned on the roadmap https://pbs.proxmox.com/wiki/index.php/Roadmap and also https://pve.proxmox.com/wiki/Roadmap.

Both roadmaps should be a little bit more vivid. Remove the struck-through entries and nothing substantial is left...
Nothing substantial left?
Then when is "Support (tape-like) syncing to S3/Object storage types" expected for Proxmox Backup Server? Usually a roadmap comes with a timeline...
Further: I suppose that this would also make it possible to use a TrueNAS share as (offsite) storage? .... Which is now only possible using rsync (which is not recommended) or by running a second PBS instance, for example as a VM under TrueNAS (Scale). Both add an extra layer of complexity to a simple 3-2-1 backup strategy for PBS.
 
Even more interesting to you would probably be removable datastores, which are currently in the works.
These would allow you to add e.g. a removable disk connected via USB as a datastore and create a sync job to it.
Please add an option to removable datastores to automatically start a sync job + GC job + verify job when the disk is plugged in, so there is no need to add udev rules, write scripts or intervene manually just to click a few buttons. Also, an automatic disconnect of the drive once all the jobs are done would be wonderful ;)
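
Until that lands, the glue currently needed looks roughly like the sketch below; the filesystem UUID, paths and datastore/remote names are placeholders, and since udev kills long-running RUN+= commands, the actual work is handed off to a systemd unit:

Code:
# /etc/udev/rules.d/99-pbs-usb-sync.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", \
    TAG+="systemd", ENV{SYSTEMD_WANTS}+="pbs-usb-sync.service"

# /usr/local/bin/pbs-usb-sync.sh  (run by a oneshot pbs-usb-sync.service)
#!/bin/bash
set -e
mount /dev/disk/by-uuid/1234-ABCD /mnt/usb-store
# pull from this host (added as a remote pointing at itself) into the USB datastore
proxmox-backup-manager pull local-pbs store1 usb-store
proxmox-backup-manager garbage-collection start usb-store
proxmox-backup-manager verify usb-store
umount /mnt/usb-store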
 
Thanks for the detailed reply, that makes sense.

So no matter how I copy the ZFS directory, whether through rclone or some other method, as long as I reuse the same user ID I should be good to go then?
rclone seems to break PBS datastores, so I wouldn't do this:
https://forum.proxmox.com/threads/datastore-synced-with-rclone-broken.154709/
https://forum.proxmox.com/threads/pbs-appears-not-to-write-to-disk.157751/

ZFS send/receive should work, but I would thoroughly test that before trusting my backups to it. The issue is that PBS heavily relies on an attribute called atime, which records the last access time of a file or directory. Depending on how the data is transferred from one place to another, this might get broken, resulting in damaged backups. rclone doesn't seem to handle it well (since rclone can't know about any PBS specifics), while PBS's native sync jobs do.

I personally would use a removable datastore with your external disk in your case. For an offsite backup, a cheap vserver or a PBS cloud backup provider like tuxis.nl is probably your best bet. Or a mixed approach where you use PBS for local backups and Proxmox VE's native vzdump format on cloud storage for the offsite backup. I do something similar, which I discussed here:
https://forum.proxmox.com/threads/backup-pbs-to-cloud-provider-with-incremental-chunks.157965/
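
Coming back to the ZFS send/receive idea: if you test it, a quick sanity check of the atime side could look like this (the dataset name, datastore path and chunk name are placeholders):

Code:
# atime (or at least relatime) must be effective on the dataset holding the datastore
zfs get atime,relatime tank/pbs-store

# spot-check a chunk: garbage collection relies on chunk access times,
# so compare them on source and target after the transfer
stat --format='%n  atime=%x' /mnt/datastore/store1/.chunks/0000/<some chunk>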
 
Hi.
I will test another approach.
I will install a Windows machine and use the official Google Drive client on it.
Then I will share the Google Drive folder over SMB and access it from Proxmox Backup Server (mounting it as an SMB share and adding it as a datastore).
I think the performance and verification of backups will be better and faster, since Windows will manage the cache.
Of course, Windows will be locked down and run antivirus, and will not be accessible via RDP or exposed to the internet.
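
For the PBS side of that plan, the mounted share would then be turned into a datastore roughly like this (the mount point matches the mount command further down in the thread, the datastore name is just an example):

Code:
proxmox-backup-manager datastore create gdrive /mnt/smb/gdrive_backup
proxmox-backup-manager datastore list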
 
Then I will share the Google Drive folder over SMB and access it from Proxmox Backup Server (mounting it as an SMB share and adding it as a datastore).

Wow, that's a cool - and brave - attempt!

Do yourself a favor: prepare everything as you describe. Then (on the PBS, inside the storage folder used for the PBS datastore) run a common benchmark similar to this:

Code:
fio --name=randrw --ioengine=libaio --direct=1 --sync=1 --rw=randrw --bs=3M --numjobs=1 --iodepth=1 --size=20G --runtime=120 --time_based --rwmixread=75

This is just an example; depending on "iodepth", "bs" etc. the result will vary drastically. "size" should be larger than RAM/cache. The actual "bs" is at most 4 MiB, so testing with 3 MiB seems reasonable. (Not the usual 4k.)

Please note that PBS needs IOPS, IOPS and... IOPS. The recommended storage is mentioned here: https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements

Please post the output of fio in [code]...[/code]-tags :-)

----
Question to other readers: what is a good parameter set for fio-for-PBS? The above is just a dirty first idea.
 
Wow, that's a cool - and brave - attempt!

Okay, I will present myself as being "brave" too:
...run a common benchmark similar like this:

For those who like crazy setups: this is a virtual machine on an eight-year-old Synology DiskStation, using NFS(!) from the same device for the PBS datastore! It has rotating rust configured as the Synology-specific "SHR"/RAID6 with a simple (and cheap) read cache on SSD. There is no speedy metadata storage in this device. So many bad design decisions...!

My command from above:
Code:
randrw: (g=0): rw=randrw, bs=(R) 3072KiB-3072KiB, (W) 3072KiB-3072KiB, (T) 3072KiB-3072KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
randrw: Laying out IO file (1 file / 20480MiB)
Jobs: 1 (f=1): [m(1)][100.0%][w=6144KiB/s][w=2 IOPS][eta 00m:00s]               
randrw: (groupid=0, jobs=1): err= 0: pid=1297: Tue Jun 24 21:14:31 2025
  read: IOPS=6, BW=19.5MiB/s (20.5MB/s)(2343MiB/120026msec)
    slat (usec): min=345, max=27121, avg=1564.20, stdev=2264.46
    clat (msec): min=6, max=386, avg=58.06, stdev=39.68
     lat (msec): min=28, max=387, avg=59.63, stdev=39.54
    clat percentiles (msec):
     |  1.00th=[   22],  5.00th=[   27], 10.00th=[   28], 20.00th=[   29],
     | 30.00th=[   32], 40.00th=[   44], 50.00th=[   50], 60.00th=[   55],
     | 70.00th=[   63], 80.00th=[   74], 90.00th=[  102], 95.00th=[  133],
     | 99.00th=[  222], 99.50th=[  271], 99.90th=[  388], 99.95th=[  388],
     | 99.99th=[  388]
   bw (  KiB/s): min= 6059, max=79395, per=100.00%, avg=22447.78, stdev=12823.95, samples=212
   iops        : min=    1, max=   25, avg= 7.17, stdev= 4.16, samples=212
  write: IOPS=2, BW=7269KiB/s (7443kB/s)(852MiB/120026msec); 0 zone resets
    slat (usec): min=914, max=80387, avg=2699.90, stdev=6007.89
    clat (msec): min=106, max=773, avg=255.86, stdev=105.74
     lat (msec): min=109, max=775, avg=258.56, stdev=105.72
    clat percentiles (msec):
     |  1.00th=[  118],  5.00th=[  140], 10.00th=[  150], 20.00th=[  169],
     | 30.00th=[  194], 40.00th=[  226], 50.00th=[  245], 60.00th=[  259],
     | 70.00th=[  279], 80.00th=[  305], 90.00th=[  363], 95.00th=[  468],
     | 99.00th=[  667], 99.50th=[  693], 99.90th=[  776], 99.95th=[  776],
     | 99.99th=[  776]
   bw (  KiB/s): min= 5976, max=18468, per=100.00%, avg=8757.81, stdev=3335.60, samples=199
   iops        : min=    1, max=    6, avg= 2.72, stdev= 1.13, samples=199
  lat (msec)   : 10=0.09%, 20=0.28%, 50=37.28%, 100=27.98%, 250=21.88%
  lat (msec)   : 500=11.36%, 750=1.03%, 1000=0.09%
  cpu          : usr=0.44%, sys=1.33%, ctx=1304, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=781,284,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=19.5MiB/s (20.5MB/s), 19.5MiB/s-19.5MiB/s (20.5MB/s-20.5MB/s), io=2343MiB (2457MB), run=120026-120026msec
  WRITE: bw=7269KiB/s (7443kB/s), 7269KiB/s-7269KiB/s (7443kB/s-7443kB/s), io=852MiB (893MB), run=120026-120026msec

Yes, this is NOT RECOMMENDED at all. But in my specific setup... it works.

If you do crazy things like this: test these experimental setups thoroughly. For PBS this means run restore tests - not only once during the initial setup, but every few months. PBS will slow down over time until it generates timeouts, which may make it difficult to read back the actual data. (That said... my own test for this setup is overdue...)

It is worth mentioning that this is a tertiary PBS in my homelab, not my primary one! ;-)
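
For anyone who wants to script such a restore test, a minimal sketch could look like this; the repository string, group, timestamp and archive name are placeholders from a typical VM backup, so check the snapshot's file list and the client's --help output for the exact names:

Code:
export PBS_REPOSITORY='root@pam@192.0.2.5:store1'

proxmox-backup-client list                    # backup groups in the datastore
proxmox-backup-client snapshot list vm/100    # snapshots of one group

# pull one archive out of a snapshot and check it is actually readable
proxmox-backup-client restore "vm/100/2025-06-24T20:00:00Z" drive-scsi0.img.fidx /tmp/restore-test.img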
 
fio --name=randrw --ioengine=libaio --direct=1 --sync=1 --rw=randrw --bs=3M --numjobs=1 --iodepth=1 --size=20G --runtime=120 --time_based --rwmixread=75
randrw: (g=0): rw=randrw, bs=(R) 3072KiB-3072KiB, (W) 3072KiB-3072KiB, (T) 3072KiB-3072KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
randrw: Laying out IO file (1 file / 20480MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=51.0MiB/s,w=12.0MiB/s][r=17,w=4 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=1): err= 0: pid=17888: Wed Jun 25 23:32:10 2025
read: IOPS=13, BW=40.3MiB/s (42.3MB/s)(4836MiB/120012msec)
slat (usec): min=96, max=559, avg=124.43, stdev=25.73
clat (msec): min=39, max=3352, avg=59.77, stdev=138.94
lat (msec): min=39, max=3352, avg=59.89, stdev=138.94
clat percentiles (msec):
| 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45],
| 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 47], 60.00th=[ 48],
| 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 59],
| 99.00th=[ 169], 99.50th=[ 793], 99.90th=[ 2265], 99.95th=[ 3339],
| 99.99th=[ 3339]
bw ( KiB/s): min= 6144, max=73728, per=100.00%, avg=46079.65, stdev=12051.62, samples=214
iops : min= 2, max= 24, avg=15.00, stdev= 3.93, samples=214
write: IOPS=4, BW=13.7MiB/s (14.4MB/s)(1650MiB/120012msec); 0 zone resets
slat (usec): min=1642, max=2291, avg=1833.89, stdev=102.56
clat (msec): min=34, max=117, avg=38.54, stdev= 6.01
lat (msec): min=36, max=118, avg=40.37, stdev= 6.00
clat percentiles (msec):
| 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37],
| 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 39],
| 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 41], 95.00th=[ 43],
| 99.00th=[ 58], 99.50th=[ 99], 99.90th=[ 117], 99.95th=[ 117],
| 99.99th=[ 117]
bw ( KiB/s): min= 6144, max=43008, per=100.00%, avg=16834.25, stdev=8036.03, samples=200
iops : min= 2, max= 14, avg= 5.47, stdev= 2.61, samples=200
lat (msec) : 50=82.65%, 100=15.63%, 250=1.11%, 500=0.14%, 750=0.05%
lat (msec) : 1000=0.09%, 2000=0.19%, >=2000=0.14%
cpu : usr=0.15%, sys=0.93%, ctx=2177, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1612,550,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=4836MiB (5071MB), run=120012-120012msec
WRITE: bw=13.7MiB/s (14.4MB/s), 13.7MiB/s-13.7MiB/s (14.4MB/s-14.4MB/s), io=1650MiB (1730MB), run=120012-120012msec

notes:
Google Drive uses the local disk as a cache.
My drive grows in size, then slowly shrinks again after the files have been uploaded.
The CPU for the Google Drive machine was 4 cores, at about 40% usage.
The traffic on ethernet was 900 Mbps+, so it was limited by the speed of the switch and network card.
It is important that the local disk is big.

Windows 2019 Essentials - 1809.
Google One - 30 TB.
When the computer restarts, the share is lost and you must log in again. If you use the Administrator account, add it to the share permissions and use it in the credentials to mount on Linux. This saves you from getting a permissions error on Linux (after mounting the SMB share).
Command to mount:
mount -t cifs "//192.168.xxx.xx/g" /mnt/smb/gdrive_backup -o credentials=/etc/smbcredentials/backup.cred,domain=WORKGROUP,iocharset=utf8,vers=3.0,noserverino
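
The credentials file referenced in that mount command is the standard cifs-utils format, something like the following (values are placeholders; keep the file chmod 600). The fstab line only covers the Linux side of the remount problem, the Google Drive login on the Windows side still has to be re-established separately:

Code:
# /etc/smbcredentials/backup.cred
username=Administrator
password=YourPasswordHere
domain=WORKGROUP

# /etc/fstab entry to remount automatically after a PBS reboot
//192.168.xxx.xx/g  /mnt/smb/gdrive_backup  cifs  credentials=/etc/smbcredentials/backup.cred,iocharset=utf8,vers=3.0,noserverino,_netdev  0  0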
 

Second day of tests.

mount -t cifs "//192.168.xxx.xx/g" /mnt/smb/gdrive_backup -o credentials=/etc/smbcredentials/backup.cred,domain=WORKGROUP,iocharset=utf8,vers=3.11,noserverino

fio --name=randrw --ioengine=libaio --direct=1 --sync=1 --rw=randrw --bs=3M --numjobs=1 --iodepth=1 --size=20G --runtime=120 --time_based --rwmixread=75
randrw: (g=0): rw=randrw, bs=(R) 3072KiB-3072KiB, (W) 3072KiB-3072KiB, (T) 3072KiB-3072KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
randrw: Laying out IO file (1 file / 20480MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=27.0MiB/s,w=9225KiB/s][r=9,w=3 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=1): err= 0: pid=19503: Thu Jun 26 09:35:25 2025
read: IOPS=13, BW=41.6MiB/s (43.6MB/s)(4992MiB/120023msec)
slat (usec): min=93, max=693, avg=132.46, stdev=33.65
clat (msec): min=36, max=2585, avg=57.31, stdev=134.86
lat (msec): min=36, max=2585, avg=57.44, stdev=134.86
clat percentiles (msec):
| 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43],
| 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46],
| 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 52], 95.00th=[ 56],
| 99.00th=[ 211], 99.50th=[ 860], 99.90th=[ 2400], 99.95th=[ 2601],
| 99.99th=[ 2601]
bw ( KiB/s): min= 6144, max=73728, per=100.00%, avg=47853.97, stdev=13825.56, samples=213
iops : min= 2, max= 24, avg=15.58, stdev= 4.50, samples=213
write: IOPS=4, BW=14.1MiB/s (14.8MB/s)(1698MiB/120023msec); 0 zone resets
slat (usec): min=1649, max=3049, avg=1892.63, stdev=160.48
clat (msec): min=34, max=245, avg=38.73, stdev=12.96
lat (msec): min=36, max=247, avg=40.62, stdev=12.95
clat percentiles (msec):
| 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36],
| 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 39],
| 70.00th=[ 40], 80.00th=[ 40], 90.00th=[ 41], 95.00th=[ 42],
| 99.00th=[ 67], 99.50th=[ 88], 99.90th=[ 247], 99.95th=[ 247],
| 99.99th=[ 247]
bw ( KiB/s): min= 6144, max=43008, per=100.00%, avg=17093.70, stdev=8843.35, samples=202
iops : min= 2, max= 14, avg= 5.56, stdev= 2.88, samples=202
lat (msec) : 50=89.96%, 100=8.74%, 250=0.67%, 500=0.13%, 750=0.04%
lat (msec) : 1000=0.13%, 2000=0.13%, >=2000=0.18%
cpu : usr=0.16%, sys=1.02%, ctx=2245, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1664,566,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=4992MiB (5234MB), run=120023-120023msec
WRITE: bw=14.1MiB/s (14.8MB/s), 14.1MiB/s-14.1MiB/s (14.8MB/s-14.8MB/s), io=1698MiB (1780MB), run=120023-120023msec

There is a little improvement in performance using the SMB 3.11 protocol.

Attached is an explanation of the sync behaviour.
Google Drive uses the local disk as a cache, so if the VM is stored on SSD or NVMe you will get the best results.
The synchronization is asynchronous, so files are uploaded after being stored on the local disk. RAM usage is minimal.
I will now test in PBS, comparing the same backups on local storage (SSD) and on the remote share: tasks like verify, prune, etc.
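
For that comparison, the built-in tooling should be enough; something along these lines, with the datastore and repository names being placeholders:

Code:
# client-side throughput figures (TLS, compression, chunking)
proxmox-backup-client benchmark --repository root@pam@localhost:gdrive

# the jobs that hit the datastore hardest
proxmox-backup-manager verify gdrive
proxmox-backup-manager garbage-collection start gdrive
proxmox-backup-manager garbage-collection status gdrive

# prune dry-run against one backup group
proxmox-backup-client prune vm/100 --keep-last 2 --dry-run --repository root@pam@localhost:gdrive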
 

Attachments: print.png