PVE needs the option to back up every x minutes

vikozo

hello

I think that on the PVE side there should be an option to run backups every 15 minutes, not only once a day, as soon as a PBS is connected!

have a nice day
vinc

=============
Proxmox Backup Server 0.8-9 Beta
running as vPBS with
> 2 vCPU (sockets), CPU type set to Host
> 4 GiB vRAM
> Installation: 15G
> Data: 500G vDisk & 750G vDisk with WriteBack

installed from the ISO file BETA-1.iso
 
Replication with ZFS is what you want in such a setup. Backing up every 15 minutes may be feasible for a very small number of VMs with small disks, but it will not work otherwise (i.e. in the common PVE setup with many VMs and a lot of storage).
Imagine you have a 1 TB disk and storage that can read at 1 GB/s: you cannot even read the whole disk in 15 minutes, so you cannot back it up in that time either.
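A quick back-of-the-envelope check of that claim, using the numbers from the example:

```python
# Feasibility check: time needed to read a full disk at a given sequential rate.
disk_bytes = 1 * 10**12   # 1 TB disk
read_rate = 1 * 10**9     # 1 GB/s sequential read, an optimistic figure
minutes = disk_bytes / read_rate / 60
print(f"{minutes:.1f} min")  # about 16.7 minutes, already over a 15-minute window
```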
 

@LnxBil > I think he means that since we got PBS we can actually do this with running VMs via dirty-bitmap, as soon as the VM has run one backup since its last reboot.

@vikozo you can do this with PVE if you edit the backup cron manually and set the minute field to */15.
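For example, an entry in `/etc/pve/vzdump.cron` could look like this (the VMID and storage name here are only illustrative):

```
# run the backup job every 15 minutes instead of once a day
*/15 * * * *    root vzdump 350 --mode snapshot --storage pbs1 --quiet 1
```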
 
Hmm, haven't read anything about this. Where did you get your information from?

I cannot recall where on the Proxmox forum, but it's mentioned in the QEMU docs and I saw it in the logs. The example below takes about 32 minutes the first time; if I keep the VM running, subsequent runs only back up the changes, because QEMU tracks them in a dirty bitmap.

https://www.qemu.org/docs/master/interop/bitmaps.html
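Per those QEMU docs, dirty bitmaps are managed over QMP; for instance, one can be created with the `block-dirty-bitmap-add` command (the node and bitmap names below are illustrative):

```
{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive-scsi0", "name": "backup-bitmap", "persistent": false } }
```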

Code:
INFO: Starting Backup of VM 350 (qemu)
INFO: Backup started at 2020-07-26 15:20:02
INFO: status = running
INFO: VM Name: michael
INFO: include disk 'scsi0' 'ceph1_vm:vm-350-disk-0' 16G
INFO: include disk 'scsi2' 'ceph1_vm:vm-350-disk-2' 500G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/350/2020-07-26T13:20:02Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'fddec4ef-ff44-4b7c-b47d-d7b908e01ae4'
INFO: resuming VM again
INFO: using fast incremental mode (dirty-bitmap), 256.0 MiB dirty of 516.0 GiB total
INFO: status: 100% (256.0 MiB of 256.0 MiB), duration 2, read: 128.0 MiB/s, write: 128.0 MiB/s
INFO: backup was done incrementally, reused 515.76 GiB (99%)
INFO: transferred 256.00 MiB in 2 seconds (128.0 MiB/s)
INFO: Finished Backup of VM 350 (00:00:02)
INFO: Backup finished at 2020-07-26 15:20:04
INFO: Backup job finished successfully
TASK OK
 
One *very* important note that bit me while testing some things... any VM that *isn't* running *will* do a full backup every time, rather than benefiting from that QEMU feature. That can cause your entire backup task to effectively 'hang' on the one VM for 20+ minutes depending on size (all my testing is on spinning disks, and I have some real data sitting there too), so you miss some of those backup cycles. I believe vzdump's own locking *should* handle that by just failing the overlapping runs, at least; cron by itself won't skip them.
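If you do run such a tight schedule, wrapping the cron entry in `flock` is one common way to make sure an overlapping run is silently skipped while the previous one is still going (the lock path and job line below are illustrative):

```
# -n: exit immediately instead of waiting if the previous run still holds the lock
*/15 * * * *    root flock -n /run/backup-15m.lock vzdump 350 --mode snapshot --storage pbs1 --quiet 1
```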
 

That's strange, are you sure it's a full? When I back up stopped VMs, they look like they are doing a full backup, but the speed (MB/s) is much higher than my backup network (1 Gbit) can manage, and in the end an incremental backup is reported. To me it seems it does incremental backups for stopped VMs too; it's also much faster after the first backup, hence incremental. See below how much is reused in the end:

Code:
INFO: starting new backup job: vzdump 233 --mode snapshot --storage backupserver3 --node bony21 --remove 0
INFO: Starting Backup of VM 233 (qemu)
INFO: Backup started at 2020-07-26 15:37:53
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: ns2-old
INFO: include disk 'scsi0' 'ceph1_vm:vm-233-disk-0' 10G
INFO: creating Proxmox Backup Server archive 'vm/233/2020-07-26T13:37:53Z'
INFO: starting kvm to execute backup task
INFO: started backup task '8f2eab92-6e5d-426a-9983-0a4f7eb8d849'
INFO: status: 12% (1.2 GiB of 10.0 GiB), duration 3, read: 410.7 MiB/s, write: 410.7 MiB/s
INFO: status: 22% (2.3 GiB of 10.0 GiB), duration 6, read: 366.7 MiB/s, write: 366.7 MiB/s
INFO: status: 35% (3.5 GiB of 10.0 GiB), duration 9, read: 420.0 MiB/s, write: 420.0 MiB/s
INFO: status: 45% (4.6 GiB of 10.0 GiB), duration 12, read: 370.7 MiB/s, write: 370.7 MiB/s
INFO: status: 57% (5.7 GiB of 10.0 GiB), duration 15, read: 388.0 MiB/s, write: 388.0 MiB/s
INFO: status: 69% (6.9 GiB of 10.0 GiB), duration 18, read: 409.3 MiB/s, write: 409.3 MiB/s
INFO: status: 83% (8.3 GiB of 10.0 GiB), duration 21, read: 480.0 MiB/s, write: 480.0 MiB/s
INFO: status: 96% (9.6 GiB of 10.0 GiB), duration 24, read: 440.0 MiB/s, write: 440.0 MiB/s
INFO: status: 100% (10.0 GiB of 10.0 GiB), duration 25, read: 384.0 MiB/s, write: 384.0 MiB/s
INFO: backup was done incrementally, reused 9.83 GiB (98%)
INFO: transferred 10.00 GiB in 25 seconds (409.6 MiB/s)
INFO: stopping kvm after backup task
INFO: Finished Backup of VM 233 (00:00:28)
INFO: Backup finished at 2020-07-26 15:38:21
INFO: Backup job finished successfully
TASK OK
 
While it may well not be *transferring* the whole image, it's at least paying the penalty for *reading* the whole thing. As mentioned, it's all spinning disks in my cluster here (at least for the VM storage; PVE itself runs off an SSD in some nodes and an SD card in others). In the example I tripped over, it's *definitely* not network bandwidth, since that VM is on the same host that runs PBS, but that may also be causing some I/O delay from the parallel read/write work (the same ZFS array backs both). Here's an example of a second pass with the VM running, benefiting from the dirty-bitmap feature:

Code:
INFO: Starting Backup of VM 8021 (qemu)
INFO: Backup started at 2020-07-26 21:00:29
INFO: status = running
INFO: VM Name: foreman01
INFO: include disk 'scsi0' 'data:vm-8021-disk-0' 200G
INFO: include disk 'efidisk0' 'data:vm-8021-disk-1' 128K
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/8021/2020-07-27T01:00:29Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'e3f01aca-07ad-4431-bbb2-610e869f4a06'
INFO: resuming VM again
INFO: using fast incremental mode (dirty-bitmap), 160.0 MiB dirty of 200.0 GiB total
INFO: status: 100% (160.0 MiB of 160.0 MiB), duration 3, read: 53.3 MiB/s, write: 53.3 MiB/s
INFO: backup was done incrementally, reused 199.84 GiB (99%)
INFO: transferred 160.00 MiB in 3 seconds (53.3 MiB/s)
INFO: Finished Backup of VM 8021 (00:00:05)
INFO: Backup finished at 2020-07-26 21:00:34

And here's one with the VM stopped. It's still incremental, so I was mistaken there, but it clearly read the whole disk back over:

Code:
INFO: Starting Backup of VM 8021 (qemu)
INFO: Backup started at 2020-07-26 12:38:03
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: foreman01
INFO: include disk 'scsi0' 'data:vm-8021-disk-0' 200G
INFO: include disk 'efidisk0' 'data:vm-8021-disk-1' 128K
INFO: creating Proxmox Backup Server archive 'vm/8021/2020-07-26T16:38:03Z'
INFO: starting kvm to execute backup task
INFO: started backup task '419a1a10-ac25-46c1-bbe9-867c1afb5a8a'
INFO: status: 0% (204.1 MiB of 200.0 GiB), duration 4, read: 51.0 MiB/s, write: 51.0 MiB/s
INFO: status: 1% (2.0 GiB of 200.0 GiB), duration 21, read: 110.8 MiB/s, write: 110.8 MiB/s

<snipped for brevity, speeds average around 140 MiB/s R/W for most of the run>

INFO: status: 99% (198.1 GiB of 200.0 GiB), duration 1455, read: 145.1 MiB/s, write: 145.1 MiB/s
INFO: status: 100% (200.0 GiB of 200.0 GiB), duration 1469, read: 136.6 MiB/s, write: 136.6 MiB/s
INFO: backup was done incrementally, reused 196.50 GiB (98%)
INFO: transferred 200.00 GiB in 1469 seconds (139.4 MiB/s)
INFO: stopping kvm after backup task
INFO: Finished Backup of VM 8021 (00:24:33)
INFO: Backup finished at 2020-07-26 13:02:36
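That pattern (whole disk read, ~98% reused) is what content-addressed deduplication looks like. A minimal sketch of the idea, not PBS's actual implementation: chunk the image, hash each chunk, and upload only chunks the server doesn't already hold, so reading stays O(disk size) even when the upload is tiny.

```python
import hashlib

CHUNK = 4 * 1024 * 1024  # fixed-size chunks, roughly what PBS uses for VM images

def backup(image: bytes, server_chunks: dict) -> tuple[int, int]:
    """Back up `image` against a server-side chunk store.
    Returns (bytes_read, bytes_uploaded): the whole image is always read,
    but only chunks the server does not already hold are uploaded."""
    read = uploaded = 0
    for off in range(0, len(image), CHUNK):
        chunk = image[off:off + CHUNK]
        read += len(chunk)
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_chunks:
            server_chunks[digest] = chunk
            uploaded += len(chunk)
    return read, uploaded

server = {}
image = bytes(16 * 1024 * 1024)        # 16 MiB dummy disk image
read1, up1 = backup(image, server)     # first run: reads everything, uploads new chunks
read2, up2 = backup(image, server)     # second run: reads everything, uploads nothing
print(read2, up2)                      # full read, zero upload: "incremental" but still I/O-bound
```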
 
