Keeping a backup VM up-to-date

Feb 14, 2021
So, I have Proxmox running with a VM on one node (node A), and twice each day I run PBS to make a backup of this VM. It runs very smoothly and fast, thanks a lot.

In case of problems with node A, I have node B ready to take over. I've tried to connect nodes A and B in a cluster, adding one more node for quorum. But this proved more complicated than I thought: I spent a lot of time and effort securing quorum for the cluster. Much of this is probably due to my lack of knowledge and of a systematic approach.

Instead, if anything goes wrong with node A, my plan now is to recover the VM on node B. I've tested this, and it works smoothly, but it is also quite slow (around 4 hours). Even if the VM on node B is only a few days old, when I restore the newest backup from PBS, the restore process seems to copy every byte from the PBS backup to node B.

Is there some way to restore just the changes? To tell PBS: "most of this VM is quite OK, but please make the few changes needed to make it match this backup"?


Any help appreciated,
Jesper Holck, Denmark
 
No, there is no "incremental restore" ;)

You can try "live restore", which allows you to start the VM while it is being restored (any data the guest requests is loaded first, on demand).
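As a rough sketch (my addition, not spelled out above): with the backup on a PBS-backed storage, a live restore can be started from the CLI roughly like this. The storage name "pbs" and the snapshot timestamp are made up, and the option names should be checked against your PVE version:
Code:
# Restore VM 100 from a PBS-backed storage called "pbs" and boot it
# immediately; blocks are fetched on demand while the restore runs
qmrestore pbs:backup/vm/100/2021-02-14T06:00:00Z 100 --live-restore 1 --storage local-zfs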
 
Did you have a look at ZFS replication?
If you have a second node with enough storage to restore that VM, you could also use replication, where the VM's virtual disks are incrementally synced between the nodes. Set the replication interval to, for example, 1 minute, and when one of the nodes fails you will only lose the data written since the last replication, at most one minute's worth. You could then start that VM on the other node almost instantaneously, as the virtual disk already exists there.
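As a sketch (this part is my addition): with both nodes joined in a cluster, such a replication job can be created on the command line with pvesr; the node name "nodeB" is a placeholder:
Code:
# Replicate VM 100 to nodeB every minute (job id 100-0); requires a cluster
# and ZFS-backed disks on both nodes
pvesr create-local-job 100-0 nodeB --schedule "*/1"
# Show the state of all replication jobs on this node
pvesr status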
 
Thanks, I will look into ZFS replication; this seems like a good idea.
 
I guess it would technically be possible to implement an incremental restore, at least for VMs, although it would be a bit more expensive than you (likely ;)) think, unless you want to keep the restore target VMs "running, but paused" 24/7.
 
Thanks to the suggestions from Dunuin I've looked into ZFS replication and found a way to make it work, although not automatically yet. I will briefly describe it here, hoping others may be inspired and follow up.
I have two Proxmox hosts, A and B. And I have VM 100 running on host A, working as mail and web server for a small organization. In case of failures or maintenance on host A, I would like to be able to switch over to B and have my VM running there. And I would like to do this without creating a cluster and having issues with quorum.

So, here is what I've done, from host A:
Code:
# Snapshot VM 100 without saving its RAM state (--vmstate 0)
qm snapshot 100 tuesday-2110 --vmstate 0
# Send the snapshot as a full stream to host B (10.10.10.167)
zfs send rpool/data/vm-100-disk-0@tuesday-2110 | ssh 10.10.10.167 zfs recv rpool/data/vm-100-disk-0@tuesday-2110

This makes a local snapshot on host A and sends it to host B (at IP address 10.10.10.167). This initial, full send can take quite a long time. Don't use the "-v" (verbose) option, as it makes the transfer much slower.
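A possible way to shorten the initial transfer (my addition, so treat it as a hint rather than part of the recipe): let zfs send the blocks in their on-disk compressed form, and use pv for progress instead of "-v":
Code:
# -c (--compressed) sends blocks as stored on disk, so compressed data is not
# decompressed just to travel over ssh; pv shows throughput without the
# per-record overhead of "zfs send -v"
zfs send -c rpool/data/vm-100-disk-0@tuesday-2110 | pv | ssh 10.10.10.167 zfs recv rpool/data/vm-100-disk-0@tuesday-2110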

After this, vm-100-disk-0 is visible on host B. The next step is to create a VM on host B identical to the one on host A. First I create the new VM in the GUI on host B, and then I edit the file /etc/pve/qemu-server/100.conf to make the new VM identical to the one on host A, using the newly created disk. Now VM 100 should run as happily on host B as it did on host A.
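For illustration, the relevant part of such a 100.conf might look like the sketch below; all values here are made up, and you should copy the real ones from host A. The important bit is that the disk line points at the replicated zvol:
Code:
boot: order=scsi0
cores: 2
memory: 4096
name: mailserver
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
ostype: l26
scsi0: local-zfs:vm-100-disk-0,size=32G
scsihw: virtio-scsi-pci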

In order to keep the two VMs in sync I regularly do this, from host A:
Code:
# New snapshot, again without RAM state
qm snapshot 100 wednesday-0819 --vmstate 0
# Incremental send: only blocks changed since tuesday-2110 are transferred
zfs send -i rpool/data/vm-100-disk-0@tuesday-2110 rpool/data/vm-100-disk-0@wednesday-0819 | ssh 10.10.10.167 zfs recv -F rpool/data/vm-100-disk-0

The "-F" option tells zfs to overwrite possible changes on host B. This incremental send works MUCH faster than the initial send. For me, a few minutes, compared to the initial 12 hours.

If I switch over to host B and run the VM from there, I can similarly make snapshots on host B and transfer them to host A at regular intervals. This lets me easily switch back to host A when I need to.
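In that direction the commands are simply mirrored; a sketch run from host B, assuming host A answers on 10.10.10.166 (that address is made up):
Code:
qm snapshot 100 thursday-1200 --vmstate 0
zfs send -i rpool/data/vm-100-disk-0@wednesday-0819 rpool/data/vm-100-disk-0@thursday-1200 | ssh 10.10.10.166 zfs recv -F rpool/data/vm-100-disk-0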
 
I've worked a bit more on this and written two scripts to help me keep two VMs in sync across different hosts. I know the "official" way to do this is via a cluster, but I like this "handheld" approach more.

First, I've written a bash script to help me make snapshots of my VMs (called "ubuntu" and "sme") and transfer the snapshots between the two hosts (called "Sonja" and "Lasse").

Code:
#!/usr/bin/bash
# Abort if not run under bash (the script uses bash-only features)
if [ ! "$BASH_VERSION" ]
then
  echo "Please use bash for running this script"
  exit 1
fi
remote_ip="10.10.10.167"
remote_host="Sonja"
local_host="Lasse"
echo "Make new snapshot on $local_host and transfer it to $remote_host"
echo
ping -c 2 -q "$remote_ip" > /dev/null
if [ $? -ne 0 ]
then
  echo "No connection to $remote_host"
  exit 1
fi
echo "OK, $remote_host is alive"

read -p "Name of virtual server (sme,ubuntu): " virtual_server
# Map the server name to its VM id and the ZFS datasets backing its disks
if [ "$virtual_server" = "sme" ]
then
  declare -a disks=("rpool/data/vm-100-disk-0" "pool-mirror/vm-100-disk-0")
  vm=100
elif [ "$virtual_server" = "ubuntu" ]
then
  declare -a disks=("rpool/data/vm-201-disk-0")
  vm=201
else
  echo "$virtual_server not recognized"
  exit 1
fi

read -p "Name of new snapshot (e.g. \"torsdag-0915\"): " new_snapshot
qm snapshot $vm $new_snapshot --vmstate 0
rc=$?
if [ $rc -ne 0 ]
then
  echo "Snapshot failed with return code $rc"
  exit 1
fi
zfs list -r -t snapshot -o name,creation,used rpool | grep vm-$vm
echo "All ok, continuing with transfer"
read -p "Name of previous snapshot: " prev_snapshot
for base in "${disks[@]}"
do
  echo "Transferring $base to $remote_host"
  zfs send -i $base@$prev_snapshot $base@$new_snapshot | ssh $remote_ip zfs recv -F $base
  rc=$?
  if [ $rc -ne 0 ]
  then
    echo "Transfer failed with return code $rc"
    exit 1
  fi
  echo "All ok, continuing..."
done
read -p "Old snapshot to delete: " old_snapshot
for base in "${disks[@]}"
do
  zfs destroy $base@$old_snapshot
done
zfs list -r -t snapshot -o name,creation,used rpool | grep vm-$vm
exit 0

When I do this, new snapshots are created on the remote host, but I also need to update the configuration files. For this, I've written a Perl script:
Code:
#!/usr/bin/perl
use warnings;

%special_entry = ("snaptime" => 1, "parent" => 1);

foreach $vm (100, 201) {
  open (NEW, ">", "$vm.conf.new") or die $!;

# First check entries in vm conf files

  open (FIL, "<", "/etc/pve/nodes/lasse/qemu-server/$vm.conf") or die $!;
  %content=();
  $version = "base";
  $comments = "";
  while ($line = <FIL>) {
    if ($line =~ /^#/) {
      $comments .= $line;
      next
    }
    chomp $line;
    next if ($line !~ /\W/);
    if ($line =~ /^\[/) {
      $version = $line;
      $version =~ s/^\[//;
      $version =~ s/\]$//;
    } else {
      ($param, $value) = split (': ', $line, 2);
      $content{$version}{$param} = $value;
    }   
  }
  close (FIL);

# Check that all entries are alike

  foreach $version (keys %content) {
    foreach $param (keys %{$content{$version}}) {
      next if ( exists($special_entry{$param}));
      if ($content{$version}->{$param} ne $content{base}->{$param}) {
         print "Inconsistent conf-file:\n";
         print "$version: $param = $content{$version}->{$param}\n";
      }
    }
  }

# Examine zfs snapshots

  @snapshots = qx/zfs list -t snapshot -p -o name,creation/;
  @disks = ();
  %snaps_by_time = ();
  %snaps_by_name = ();
  foreach $snap (@snapshots) {
    next unless (index ($snap, $vm) != -1);
    next unless ($snap =~  /\d/);
    chomp $snap;
    ($zfsname, $time) = split(/\s+/, $snap);
    ($disk, $name) = split(/\@/, $zfsname);
  #  print "Disk: $disk, Name: $name, Time: $time\n";
    $snaps_by_name{$name} = $time;
    $snaps_by_time{$time} = $name;
    push (@disks, $disk);
  }

  print "\nExamining snapshots in qm conf-file for vm$vm...\n";
  foreach $version (keys %content) {
    next if ($version eq "base");
    unless (exists($snaps_by_name{$version})) {
      print "\n$version Snapshot not present in zfs!\n"
    }
  }

  print "\nExamining snapshots on zfs for vm$vm ...\n";   

  print NEW $comments;
  foreach $param (keys %{$content{base}}) {
    next if ( exists($special_entry{$param}));
    print NEW "$param: $content{base}->{$param}\n";
  }
  $maxtime = 0;
  foreach $time (keys %snaps_by_time) {
    if ($time > $maxtime) {
      $maxtime = $time
    }
  }
  print NEW "parent: $snaps_by_time{$maxtime}\n\n"; 

  $previous = "";
  foreach $time (sort { $a <=> $b } keys %snaps_by_time) {
    $name = $snaps_by_time{$time};
    next if ($name eq $previous);
    add_entry($name, $previous);
    $previous = $name;
  }
  close (NEW);
}

sub add_entry {
  ($name, $previous) = @_;
  print NEW "[$name]\n";
  foreach $param (keys %{$content{base}}) {
    next if ( exists($special_entry{$param}));
    print NEW "$param: $content{base}->{$param}\n";
  }
  print NEW "parent: $previous \n";
  print NEW "snaptime: $snaps_by_name{$name}\n\n";
}

The script produces new .conf files for the two VMs, so that all existing snapshots are referenced in the .conf files.
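Before a generated file is copied over the live one, it is worth diffing the two; something like this (paths as in my setup):
Code:
diff /etc/pve/nodes/lasse/qemu-server/100.conf 100.conf.new
cp 100.conf.new /etc/pve/nodes/lasse/qemu-server/100.conf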

If anyone can use or comment on these scripts, I'll be glad :)

Of course the scripts need to be adjusted to fit your own platforms.
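To eventually run the whole thing unattended, the interactive prompts could be replaced by time-based snapshot names and a small state file remembering the previous snapshot; a sketch (paths and names are just examples):
Code:
# Derive the snapshot name from the current time, e.g. "wednesday-0819"
new_snapshot=$(date +%A-%H%M | tr '[:upper:]' '[:lower:]')
# Remember the previous snapshot per VM in a state file for the next run
state_file="/var/local/last-snapshot-$vm"
prev_snapshot=$(cat "$state_file")
# ... snapshot, send, destroy as in the script above ...
echo "$new_snapshot" > "$state_file"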
 
