Only Configuration saved to PBS

Cookies

New Member
Aug 24, 2021
So today a coworker was talking about backups, and after checking out our system, it looks like the actual data of the VMs is not being backed up, only the configuration files or something along those lines. I'm not exactly sure.

Logged into the PBS, I checked the size of our backup directory. The backup directory is an NFS mount point on a secondary dedicated backup device that then uploads a snapshot to the cloud.

Code:
root@pbs:/mnt/Cohesity/vm# df -h
Filesystem                                 Size  Used Avail Use% Mounted on
udev                                       967M     0  967M   0% /dev
tmpfs                                      199M   21M  179M  11% /run
/dev/mapper/pbs-root                        24G  2.6G   20G  12% /
tmpfs                                      994M     0  994M   0% /dev/shm
tmpfs                                      5.0M     0  5.0M   0% /run/lock
tmpfs                                      994M     0  994M   0% /sys/fs/cgroup
cohesity.domain.private:/Proxmox-Backup  204T  136T   69T  67% /mnt/Cohesity
tmpfs                                      199M     0  199M   0% /run/user/0
root@pbs:/mnt/Cohesity/vm# du -sh ./*
5.8M    ./100
8.6M    ./101
7.2M    ./102
15M     ./103
36M     ./104
15M     ./105
310M    ./106
57M     ./107
29M     ./108
3.7M    ./109
4.7M    ./110
1.3M    ./111
8.1M    ./112
38M     ./113
22M     ./114
22M     ./115
34M     ./116
34M     ./117
8.1M    ./118
46M     ./119
11M     ./120
8.9M    ./121

root@pbs:/mnt/Cohesity/vm/113/2021-08-11T07:00:09Z# du -sh ./*
1.0K    ./client.log.blob
2.0M    ./drive-scsi0.img.fidx
512     ./index.json.blob
512     ./qemu-server.conf.blob
root@pbs:/mnt/Cohesity/vm/113/2021-08-11T07:00:09Z#

As you can see, each VM is using less than 1 GB of space (in some cases only a few MB).

Versus the actual storage of the VMs:
Code:
root@vm3:/mnt/pve/Isilon/images# df -h
Filesystem                          Size  Used Avail Use% Mounted on
udev                                 32G     0   32G   0% /dev
tmpfs                               6.3G  666M  5.7G  11% /run
/dev/mapper/pve-root                 46G  3.6G   40G   9% /
tmpfs                                32G   60M   32G   1% /dev/shm
tmpfs                               5.0M     0  5.0M   0% /run/lock
tmpfs                                32G     0   32G   0% /sys/fs/cgroup
isilon.domain.private:/ifs/VMs   1.1P  890T  169T  85% /mnt/pve/Isilon
isilon.domain.private:/ifs/ISOs  1.1P  890T  169T  85% /mnt/pve/ISOs
/dev/fuse                            30M   40K   30M   1% /etc/pve
tmpfs                               6.3G     0  6.3G   0% /run/user/0
root@vm3:/mnt/pve/Isilon/images# du -sh ./*
4.2G    ./100
73G     ./101
3.5G    ./102
115G    ./103
287G    ./104
115G    ./105
538G    ./106
230G    ./107
230G    ./108
29G     ./109
8.0G    ./110
11G     ./111
69G     ./112
9.1G    ./113
9.7G    ./114
7.5G    ./115
395G    ./116
287G    ./117
69G     ./118
429G    ./119
34G     ./120
11G     ./121
3.2G    ./122
3.1G    ./123
root@vm3:/mnt/pve/Isilon/images#

Here the actual VM sizes are significantly larger.

The backup seems to be going through, even on servers with very few changes (a new server we are implementing that is not in use yet), and transferring data, but nothing seems to be getting saved at all.

From the output of a manual backup:
Code:
INFO: starting new backup job: vzdump 113 --remove 0 --storage PBS --mode snapshot --node vm1
INFO: Starting Backup of VM 113 (qemu)
INFO: Backup started at 2021-08-24 10:59:34
INFO: status = running
INFO: VM Name: deadline
INFO: include disk 'scsi0' 'Isilon:113/vm-113-disk-0.qcow2' 250G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/113/2021-08-24T17:59:34Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '70adecdd-2daf-4c5d-b2cf-a0ef013961bf'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: OK (608.0 MiB of 250.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 608.0 MiB dirty of 250.0 GiB total
INFO:  34% (208.0 MiB of 608.0 MiB) in 3s, read: 69.3 MiB/s, write: 69.3 MiB/s
INFO:  65% (396.0 MiB of 608.0 MiB) in 6s, read: 62.7 MiB/s, write: 62.7 MiB/s
INFO:  99% (604.0 MiB of 608.0 MiB) in 9s, read: 69.3 MiB/s, write: 69.3 MiB/s
INFO: 100% (608.0 MiB of 608.0 MiB) in 12s, read: 1.3 MiB/s, write: 1.3 MiB/s
INFO: backup was done incrementally, reused 249.41 GiB (99%)
INFO: transferred 608.00 MiB in 29 seconds (21.0 MiB/s)
INFO: Finished Backup of VM 113 (00:00:30)
INFO: Backup finished at 2021-08-24 11:00:04
INFO: Backup job finished successfully
TASK OK


From /var/log/daemon.log:
Code:
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: starting new backup reader datastore 'Cohesity': "/mnt/Cohesity"
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: protocol upgrade done
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: GET /download
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: download "/mnt/Cohesity/vm/113/2021-08-11T07:00:09Z/index.json.blob"
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: GET /download
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: download "/mnt/Cohesity/vm/113/2021-08-11T07:00:09Z/qemu-server.conf.blob"
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: reader finished successfully
Aug 24 11:30:07 pbs proxmox-backup-proxy[584]: TASK OK
Aug 24 11:30:16 pbs proxmox-backup-proxy[584]: starting new backup on datastore 'Cohesity': "vm/113/2021-08-24T18:30:15Z"
Aug 24 11:30:16 pbs proxmox-backup-proxy[584]: download 'index.json.blob' from previous backup.
Aug 24 11:30:16 pbs proxmox-backup-proxy[584]: register chunks in 'drive-scsi0.img.fidx' from previous backup.
Aug 24 11:30:16 pbs proxmox-backup-proxy[584]: download 'drive-scsi0.img.fidx' from previous backup.
Aug 24 11:30:16 pbs proxmox-backup-proxy[584]: created new fixed index 1 ("vm/113/2021-08-24T18:30:15Z/drive-scsi0.img.fidx")
Aug 24 11:30:16 pbs proxmox-backup-proxy[584]: add blob "/mnt/Cohesity/vm/113/2021-08-24T18:30:15Z/qemu-server.conf.blob" (365 bytes, comp: 365)
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Upload statistics for 'drive-scsi0.img.fidx'
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: UUID: 6971e8fd214e4c25a8a46bab11f67053
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Checksum: 46063ba86982852902c5c5f9efbc5a3b0658407c748f5b49f5a331060bea16aa
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Size: 176160768
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Chunk count: 42
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Upload size: 180355072 (102%)
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Duplicates: 0+1 (2%)
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: Compression: 33%
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: successfully closed fixed index 1
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: add blob "/mnt/Cohesity/vm/113/2021-08-24T18:30:15Z/index.json.blob" (328 bytes, comp: 328)
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: successfully finished backup
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: backup finished successfully
Aug 24 11:30:20 pbs proxmox-backup-proxy[584]: TASK OK
Aug 24 11:30:21 pbs proxmox-backup-proxy[584]: Upload backup log to Cohesity/vm/113/2021-08-24T18:30:15Z/client.log.blob

Am I missing something, or is there anywhere else I can look? It looks like the data is being sent to PBS, but nothing seems to be getting saved to the storage drive, leaving us unable to create a real backup.

Thanks!
 
Am I missing something, or is there anywhere else I can look? It looks like the data is being sent to PBS, but nothing seems to be getting saved to the storage drive, leaving us unable to create a real backup.
Maybe you missed that Proxmox Backup Server uses CAS (content-addressable storage), which saves the actual backup data in a .chunks folder at the top level of the datastore path.

The backup directory stores small files directly (for them, the CAS would not be worth it), while for bigger data, for example the guest disks, it stores only the list of chunks in the fixed (.fidx) and dynamic (.didx) index files. This allows the actual data blocks to be reused by other backups, taking less space in total and requiring less data to be sent over the network.

See https://pbs.proxmox.com/docs/technical-overview.html for a more detailed technical overview.
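For example, to see how much data the chunk store actually holds, you can check it directly (a minimal check, assuming the datastore root is /mnt/Cohesity, as in your df output):
Bash:
# The chunk store lives at the top level of the datastore, not under vm/.
du -sh /mnt/Cohesity/.chunks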

I mean, the simplest test is doing a restore of a backup to a new guest and checking if it results in a working guest with the expected data. That's actually something that's recommended to do periodically, e.g., pick a few random backups and restore them once a week or month or so, as only tested backups are worth their salt :)
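A minimal sketch of such a test restore from the PVE side, assuming your PBS storage is named 'PBS' (as in your vzdump output), 'Isilon' is a valid target storage on the node, and VMID 999 is unused:
Bash:
# Restore a snapshot to a fresh, unused VMID, then boot it and verify the data.
qmrestore PBS:backup/vm/113/2021-08-11T07:00:09Z 999 --storage Isilon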

root@pbs:/mnt/Cohesity/vm# du -sh ./*
Rather try:
Bash:
du -hd1 /mnt/Cohesity/ | sort -h

to see where the biggest disk usage is actually located.
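On a datastore that is actually receiving backup data, the .chunks directory should dominate that listing, while the vm/ directory stays small, just as you observed.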
 
Wow, how did I miss that?

Thank you! Looks like my data is safe and I was freaking out over nothing. I guess working 14-hour days with zero time to check things is not always a good thing.

Thanks!
 
