[SOLVED] LXC backup speed drops towards the end of backup.

Oct 6, 2019
Hello!

I'm backing up an LXC container with a rootfs and 2 extra ZFS mount points:

Code:
vpool/subvol-120-disk-0              909M  7.11G      909M  /vpool/subvol-120-disk-0
vpool/subvol-120-disk-1-mysql-data  14.2G  25.8G     14.2G  /vpool/subvol-120-disk-1-mysql-data
vpool/subvol-120-disk-2-mysql-log    445M  79.6G      445M  /vpool/subvol-120-disk-2-mysql-log

Since the backup was unusually slow, I tried it without compression. Judging by the tar file size, it quickly backs up all the data (the tar file reached 15G fairly quickly) and then spends a long time doing "something". During that phase I saw 50M/s disk reads on the source disk and tar using 8% of the 4 CPU cores, while disk writes to the backup disk were practically non-existent.
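For reference, the "no compression" run was something along these lines (a sketch, not the exact command; VMID 120 is taken from the dataset names above and the dump directory is assumed):

Code:
vzdump 120 --compress 0 --mode snapshot --dumpdir /mnt/backup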

Might that delay have something to do with the --sparse option of tar? Or do additional mount points have a negative effect?

Is anyone else experiencing similar?
 
A little more info.

At first the backup runs nicely for about 7 minutes:

Code:
PID   PRIO USER       DISK READ  DISK WRITE  SWAPIN     IO> COMMAND
18182 be/7 100000    122.70 M/s    0.00 B/s  0.00 % 82.02 % tar cpf - --totals --one-file-system -p --sparse --numeric-owner --ac~p/?* --exclude=./var/run/?*.pid ./ ./var/lib/mysql ./var/lib/mysql-log
18177 be/7 root        0.00 B/s   23.02 M/s  0.00 %  0.00 % lzop

After 7 minutes the backup tar.dat file is at 7017M, at which point it all slows down:

Code:
18182 be/7 100000     68.61 M/s    0.00 B/s  0.00 % 91.95 % tar cpf - --totals --one-file-system -p --sparse --numeric-owner --ac~p/?* --exclude=./var/run/?*.pid ./ ./var/lib/mysql ./var/lib/mysql-log
18177 be/7 root        0.00 B/s  263.93 K/s  0.00 %  0.00 % lzop

The final compressed tar.lzo is ready after 60 minutes, with a size of 7425M.
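To see whether the slow phase lines up with ARC misses, watching the cache counters can help. A minimal sketch, assuming the arcstat tool from the ZFS utilities is available:

Code:
# print ARC size, hit rate and misses once per second while the backup runs
arcstat 1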
 
Hi,

When you make the tar file, most of the data is already present in the ZFS cache. But in the second step (compressing the tar file), the data from that tar file is not in the ZFS cache.
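A quick way to check what ZFS is allowed to cache for these datasets (names taken from the zfs list output above):

Code:
zfs get primarycache,secondarycache vpool/subvol-120-disk-1-mysql-data vpool/subvol-120-disk-2-mysql-log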


Good luck / Bafta
 
... "without compression" is also compression ;) Take a look at the command line while a backup "without compression" is running.

Good luck / Bafta.
 

I have no idea what tar does there. The resulting tar file size equals the size of all the backed up mount points.
 
Tar will create a single large file which contains all the files/folders from that VM/CT.


Yes, this should be normal!

Indeed; what I mean is that I have no idea what it's compressing when it has already created a file the size of all the data being backed up.
 
I've narrowed it down to tar becoming extremely slow when reading MySQL InnoDB log files:

Code:
tar 12083 100000 6r REG 0,98 268435456 129 /mnt/vzsnap0/var/lib/mysql-log/ib_logfile0

I can copy the 256MB file to another disk in about a second, but tar just keeps re-reading it.

I compared the 'zfs get all' parameters of the mysql-log dataset with those of the LXC root, and they are identical.

In short, tar keeps reading the 256MB file at 50MB/s for tens of minutes.
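To take tar out of the picture, the slow sequential read can be reproduced directly against the snapshot path from the lsof output above; a minimal sketch:

Code:
# read the 256MB InnoDB log file once and report throughput
dd if=/mnt/vzsnap0/var/lib/mysql-log/ib_logfile0 of=/dev/null bs=1M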
 
I've solved it - the culprit was primarycache=metadata on the MySQL datasets. With primarycache=all the backup completes in 2.5 minutes at 120M/s instead of 2.5MiB/s.
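For anyone else hitting this, the change is a single zfs set (dataset names from the zfs list output at the top of the thread):

Code:
zfs set primarycache=all vpool/subvol-120-disk-1-mysql-data vpool/subvol-120-disk-2-mysql-log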
 

Hi,

You've solved one problem, but created a new one: your MySQL server will now cache the same data twice (double caching), once at the ZFS level and once at the MySQL level. It would be better to set primarycache=all before the backup starts and set primarycache=metadata again after the backup finishes. vzdump can run a hook script; you can find an example in the documentation directory (vzdump-hook-script.pl).
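A minimal sketch of such a hook script in shell (the shipped example is Perl, but any executable works; dataset names are assumed from this thread, phase names are from the vzdump documentation):

Code:
#!/bin/bash
# vzdump calls the hook script with the phase as the first argument
phase="$1"
datasets="vpool/subvol-120-disk-1-mysql-data vpool/subvol-120-disk-2-mysql-log"

case "$phase" in
    backup-start)
        # let ZFS cache file data while the backup reads everything
        zfs set primarycache=all $datasets
        ;;
    backup-end|backup-abort)
        # back to metadata-only caching; MySQL caches its own data
        zfs set primarycache=metadata $datasets
        ;;
esac
exit 0

You can register it per job with vzdump's --script option, or globally via the script: option in /etc/vzdump.conf.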

Good luck /Bafta
 

Thanks, that's exactly what I was planning to look into in the evening. :)
 
