[SOLVED] Initial transfer of big datastore over slow connection

bockhold

Hello,

I've got a PBS pbs01 backing up some VMs that add up to around 2 TB. I would like to sync this datastore to another, off-site PBS pbs02. Unfortunately, the network link between the two PBSs runs across some relatively slow DSL lines (~20 Mbit/s).
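At that rate, the initial 2 TB alone works out to (2 × 10^12 × 8) / (20 × 10^6) ≈ 800,000 seconds, i.e. more than nine days of sustained transfer, before any overhead.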

Would it therefore be possible to, e.g., rsync the datastore on pbs01 to an external drive, rsync it from there onto the disk of pbs02, and start the regular sync from that point? The changes in the VMs aren't that big, so the differences should sync over the slow connection in acceptable time.
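Roughly like this (paths invented for illustration; note the trailing slashes, so rsync copies the directory contents, and that the hidden .chunks dir has to come along):
Bash:
# on pbs01: seed an external drive with the whole datastore, including .chunks
rsync -aHAXP /mnt/datastore/pbs01-store/ /mnt/usb/pbs01-store/

# on pbs02: copy it from the drive into the local datastore path
rsync -aHAXP /mnt/usb/pbs01-store/ /mnt/datastore/pbs01-store/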

Thanks!
 
Great, thanks!

During the rsync I should probably stop the proxmox-backup and proxmox-backup-proxy services to "freeze" the state of the datastore, correct?
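I.e. something like this (assuming the standard service names):
Bash:
# stop the PBS services so the datastore stays consistent during the copy
systemctl stop proxmox-backup-proxy proxmox-backup

# ... run the rsync ...

# start them again afterwards
systemctl start proxmox-backup proxmox-backup-proxy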
 
I've got a follow-up question here: this worked once, and now I tried it again. After copying the datastore contents (and chowning the copied directories to backup:backup), the web GUI shows e.g. "VM: 8 Groups, 0 Snapshots". PBS recognizes the different VMs but does not see the individual backups. On disk I can see the multiple chunk directories containing differently sized files.

What did I do wrong in this case? Why doesn't PBS see the snapshots?

Thanks!
 
do the group dirs contain snapshot dirs with an index.json.blob and other metadata?

it should look (roughly) like this (the .chunks dir, which contains the actual chunks, is omitted):
Code:
/backup/ct
├── 100023
│   ├── 2020-10-05T06:47:48Z
│   │   ├── catalog.pcat1.didx
│   │   ├── client.log.blob
│   │   ├── fw.conf.blob
│   │   ├── index.json.blob
│   │   ├── pct.conf.blob
│   │   └── root.pxar.didx
│   ├── 2020-10-05T06:49:39Z
│   │   ├── catalog.pcat1.didx
│   │   ├── client.log.blob
│   │   ├── fw.conf.blob
│   │   ├── index.json.blob
│   │   ├── pct.conf.blob
│   │   └── root.pxar.didx
│   ├──  [...]
│   │   [...]
│   └── owner
├── [...]
/backup/vm
├── 110
│   ├── 2021-04-26T12:03:52Z
│   │   ├── client.log.blob
│   │   ├── drive-ide1.img.fidx
│   │   ├── drive-sata1.img.fidx
│   │   ├── drive-scsi0.img.fidx
│   │   ├── drive-scsi1.img.fidx
│   │   ├── drive-virtio1.img.fidx
│   │   ├── index.json.blob
│   │   └── qemu-server.conf.blob
│   ├── 2021-04-26T12:08:12Z
│   │   ├── client.log.blob
│   │   ├── drive-ide1.img.fidx
│   │   ├── drive-sata1.img.fidx
│   │   ├── drive-scsi0.img.fidx
│   │   ├── drive-scsi1.img.fidx
│   │   ├── drive-virtio1.img.fidx
│   │   ├── index.json.blob
│   │   └── qemu-server.conf.blob
│   ├──  [...]
│   │   [...]
│   └── owner
├── [...]
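a quick way to check that in bulk - something like this should print snapshot dirs that are missing an index.json.blob (adjust the paths to your datastore; GNU find assumed):
Bash:
# snapshot dirs sit two levels below the type dir; 'owner' is a file, so -type d skips it
find /backup/ct /backup/vm -mindepth 2 -maxdepth 2 -type d \
    '!' -exec test -e '{}/index.json.blob' ';' -print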
 
Thanks for your reply!

Yes, there is metadata:

Bash:
# ls -alR datastore-as

datastore-as:
total 1048
drwxr-xr-x  4 backup backup    4096 Dec 29 08:00 .
drwxr-xr-x  5 root   root      4096 Jun 22  2021 ..
drwxrwxrwx  1 backup backup 1056768 Jun  3  2021 .chunks
-rw-r--r--  1 backup backup     310 Dec 29 08:00 .gc-status
-rw-rw-rw-  1 backup backup       0 May 27  2021 .lock
drwxrwxrwx 10 backup backup    4096 May 27  2021 vm

[...]

datastore-as/vm/110/2021-12-22T231412Z:
total 280
drwxrwxrwx  2 backup backup   4096 Dec 23 17:27 .
drwxrwxrwx 34 backup backup   4096 Dec 26 03:01 ..
-rw-rw-rw-  1 backup backup    651 Dec 23 00:14 client.log.blob
-rw-rw-rw-  1 backup backup 266240 Dec 23 00:14 drive-scsi0.img.fidx
-rw-rw-rw-  1 backup backup    630 Dec 23 17:27 index.json.blob
-rw-rw-rw-  1 backup backup    360 Dec 23 00:14 qemu-server.conf.blob

datastore-as/vm/110/2021-12-23T233042Z:
total 280
drwxrwxrwx  2 backup backup   4096 Dec 24 18:52 .
drwxrwxrwx 34 backup backup   4096 Dec 26 03:01 ..
-rw-rw-rw-  1 backup backup    693 Dec 24 00:31 client.log.blob
-rw-rw-rw-  1 backup backup 266240 Dec 24 00:31 drive-scsi0.img.fidx
-rw-rw-rw-  1 backup backup    630 Dec 24 18:52 index.json.blob
-rw-rw-rw-  1 backup backup    360 Dec 24 00:30 qemu-server.conf.blob

datastore-as/vm/110/2021-12-24T235945Z:
total 280
drwxrwxrwx  2 backup backup   4096 Dec 25 01:01 .
drwxrwxrwx 34 backup backup   4096 Dec 26 03:01 ..
-rw-rw-rw-  1 backup backup    883 Dec 25 01:01 client.log.blob
-rw-rw-rw-  1 backup backup 266240 Dec 25 01:01 drive-scsi0.img.fidx
-rw-rw-rw-  1 backup backup    509 Dec 25 01:01 index.json.blob
-rw-rw-rw-  1 backup backup    360 Dec 25 00:59 qemu-server.conf.blob

And this probably shows the problem: the colons in the directory names have been mangled into some weird special character?!
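For the record, ls -b (C-style escapes) or cat -A makes such characters visible:
Bash:
# show non-printable characters in the snapshot dir names as escapes
ls -b datastore-as/vm/110
# alternative: make non-printing characters visible
ls datastore-as/vm/110 | cat -A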
 
yeah, seems like something went wrong when copying. you could try fixing (via renaming) one snapshot, then see if it verifies correctly - if it does, rename all of them (probably best with a small script that fixes the mangled names automatically). how did you copy the datastore tree, and which file systems are involved on the source and target?
 
Thanks again for your help!

This command cleaned up the directory names: mmv -r '2021-*T??*00*0?Z' '2021-#1T#2#3:00:0#6Z'. After that I see all the snapshots in the GUI again. But when trying to verify a selected snapshot, I get SKIPPED: verify datastore-as:vm/100/2021-12-05T23:00:02Z (recently verified). How can I force a verify run?

I copied using rsync: rsync -aHAXP /source /target. The source was an NFS-mounted share from a QNAP NAS; the target is a local drive with ext4.
 
I have now forced a verify run by configuring a verify job with "skip verified: no". Works.
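For reference, the resulting job entry in /etc/proxmox-backup/verification.cfg should look roughly like this (field names from my reading of the docs, so double-check against your own file):
Code:
verification: verify-datastore-as
	store datastore-as
	ignore-verified false
	schedule daily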

And I found a better way to rename directories with bogus characters:
Bash:
# replace every character outside the safe set with a colon
for dir in *; do
    clean=$(echo "${dir}" | sed -e 's/[^A-Za-z0-9._-]/:/g')
    # only rename entries whose name actually changes
    [ "${dir}" != "${clean}" ] && mv "${dir}" "${clean}"
done
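(Putting an echo in front of the mv first previews the renames without touching anything.)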
 
great that you could fix it up - but still wondering where those characters came from in the first place ;)
 
I do, too. Especially as I mounted the shares in the first place via NFS to pbs01, where the backups were created, and now to pbs02 to copy the contents to the local drive, doing nothing differently. Unfortunately, I currently don't have the time for a full investigation... :(
 
