Backup LXC and mount point

newton21890

Hello Guys,

I have a question for you.

I need to back up a container with a 1 TB mount point. To complete the backup I have to run it in stop mode, and it takes at least 6 hours.

I have recently updated PVE and PBS to the latest release and selected metadata as the change detection mode.

Can you help me improve the performance of this backup process? I need to complete it in less time.

I trust your expertise and appreciate any assistance.

Thanks.
 
Hi,
Have you selected the change detection mode metadata for the backup job in question, as described in the docs (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_ct_change_detection_mode)?

Note that the speedup can only occur after the change detection mode has been set to metadata and a first full backup run has been performed.
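For reference, on the PVE side this is a per-job option; once set, the backup job entry in /etc/pve/jobs.cfg carries a line like the following (sketch; the job id and storage name are placeholders):

Code:
vzdump: backup-<job-id>
        mode stop
        pbs-change-detection-mode metadata
        storage <your-pbs-storage>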
 
Hello Chris,
Of course, as I mentioned earlier, I have chosen metadata as the detection mode. I have been running with this configuration since November 11th.
 
And what is the exact issue? Can you share a backup task log? How frequently does the data change in the LXC? It could be that your backup speed is limited by storage or network speed. Please share the output of a proxmox-backup-client benchmark --repository <your-pbs-repository>.
 
I am afraid you are limited by your hardware.

What hardware are you running on (PVE as well as PBS are of interest)?
Did you run the benchmark command on the PVE host or the PBS?
Is the PVE host and the PBS host on the same network?

Please also share the backup task log in question, as requested.
 
Hello Chris,

this is the link to download the log file : https://limewire.com/d/577d44ff-e52...8#17b-lG1oRkJNojwrpN65fSMqtHe_WMmIUmtpa2JEGgU

I have a cluster with 3 nodes.
The PBS VM runs here:

[attached screenshot]

and this is the VM

[attached screenshot]

The datastore is an NFS share directory from my Synology NAS connected via a 10Gb Ethernet connection.

I ran the benchmark in the PBS shell. When I try to run it on the PVE host, I get this error: Error: Error trying to connect: Error connecting to https://localhost:8007/ - tcp connection error: Connection refused (OS error: 111)
 
So from the PBS backup task log we can see:
Code:
2025-01-14T05:39:09+01:00: Size: 690591772198
2025-01-14T05:39:09+01:00: Chunk count: 174032
2025-01-14T05:39:09+01:00: Upload size: 8373749619 (1%)
2025-01-14T05:39:09+01:00: Duplicates: 172044+20 (98%)

This means that only about 7.8 GiB of data was newly uploaded; the rest was already known to the server. That is good, but it does not tell us whether the client needed to re-chunk the data or whether the change detection mode metadata could reuse known chunks as well.
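As a quick sanity check on those figures (values copied from the task log above):

```shell
# Recompute the percentages printed in the PBS task log.
size=690591772198      # total backup size in bytes
upload=8373749619      # bytes actually uploaded
dup=172044             # chunks already known to the server
chunks=174032          # total chunk count
awk -v u="$upload" -v s="$size" 'BEGIN { printf "uploaded: %.1f%%\n", 100*u/s }'
awk -v d="$dup" -v c="$chunks" 'BEGIN { printf "deduplicated: %.1f%%\n", 100*d/c }'
```

So roughly 1.2% of the data was uploaded and about 98.9% of the chunks were deduplicated.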

To investigate that further, please provide the following outputs from the PVE host (ideally in code tags or as an attachment directly in the forum, not via an external upload provider):
  • PVE backup task log for the container in question
  • pct config <VMID> --current for the LXC you are backing up
  • qm config <VMID> --current for the VM that the PBS runs on.
  • cat /etc/pve/jobs.cfg
  • cat /etc/pve/storage.cfg
Again, the benchmark results show that you are largely hardware limited. But you are also not using your CPU's hardware capabilities in the PBS VM. Please set the CPU type to host, as that should significantly increase the VM's performance.
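As a sketch of that change (using VMID 100 from this thread): the CPU type can be switched from the CLI with qm set 100 --cpu host, after which the VM config simply gains the line:

Code:
cpu: host

The VM has to be shut down and started again for the new CPU type to take effect.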
 
PVE backup task log for the container in question
I don't know how to retrieve these logs

LXC
Code:
pct config 212
arch: amd64
cores: 6
features: nesting=1
hostname: srv-proto-app
memory: 16384
mp0: nas-ba02:212/vm-212-disk-2.raw,mp=/srv-proto-app-data,backup=1,size=1000G
mp1: /mnt/backup-wam,mp=/mnt/backup-wam
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.10.1,hwaddr=BC:24:11:B4:2B:90,ip=192.168.10.212/21,type=veth
ostype: centos
rootfs: nas-ba02:212/vm-212-disk-0.raw,size=80G
startup: order=2
swap: 512
unprivileged: 1

VM PBS
Code:
qm config 100
agent: 1
boot: order=scsi0;net0
cores: 8
memory: 16384
meta: creation-qemu=7.1.0,ctime=1675957680
name: ProxmoxBackup
net0: virtio=E6:21:40:35:A0:E7,bridge=vmbr0
net1: virtio=BC:24:11:C5:18:C9,bridge=vmbr1,mtu=9000
numa: 0
ostype: l26
scsi0: lun-nas_ba03:vm-100-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=11e65001-ecb2-47ef-8297-5506b1febd69
sockets: 1
unused0: ProxmoxDataNasBA02:100/vm-100-disk-0.qcow2
unused1: ProxmoxDataNasBA02:100/vm-100-disk-0.raw
vmgenid: fbca758d-966e-4fbd-bf68-5e28f0a3ce89


Code:
cat /etc/pve/jobs.cfg
vzdump: backup-bf2f18b2-db92
        comment VM
        schedule 21:00
        compress zstd
        enabled 1
        fleecing 0
        mailnotification always
        mode snapshot
        notes-template {{guestname}}
        prune-backups keep-daily=2,keep-last=3,keep-monthly=3,keep-weekly=4,keep-yearly=2
        repeat-missed 1
        storage pbs-nas
        vmid 105,100

vzdump: backup-c9abac1c-4d9d
        comment Protocollo
        schedule 22:00
        enabled 1
        fleecing 0
        mailnotification always
        mode stop
        notes-template {{guestname}}
        pbs-change-detection-mode metadata
        prune-backups keep-daily=2,keep-hourly=1,keep-last=5,keep-monthly=2,keep-weekly=3,keep-yearly=3
        repeat-missed 1
        storage pbs-nas
        vmid 211,212

vzdump: backup-18fe16ca-aa15
        comment Network
        schedule 00:01
        compress zstd
        enabled 1
        fleecing 0
        mode suspend
        notes-template {{guestname}}
        pbs-change-detection-mode metadata
        prune-backups keep-daily=2,keep-hourly=1,keep-last=5,keep-monthly=3,keep-weekly=3,keep-yearly=3
        storage pbs-nas
        vmid 400

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

iscsi: iscsi-nas_ba03-100
        portal 10.11.100.8
        target iqn.2000-01.com.synology:nas-ba03.default-target.3b00dbaeeaf
        content none

iscsi: iscsi-nas_ba03-101
        portal 10.11.101.9
        target iqn.2000-01.com.synology:nas-ba03.default-target.3b00dbaeeaf
        content none

lvm: lun-nas_ba03
        vgname iscsi_proxmox
        content rootdir,images
        saferemove 0
        shared 1

pbs: pbs-nas
        datastore NSF
        server 192.168.12.81
        content backup
        fingerprint 71:eb:1e:15:e2:6c:e9:40:52:9e:14:91:29:a2:5e:e2:5f:88:80:d4:bd:d2:41:48:85:ff:ec:5c:8f:44:be:92
        prune-backups keep-all=1
        username root@pam

But you are also not using your CPU's hardware capabilities in the PBS VM. Please set the CPU type to host, as that should significantly increase the VM's performance.
I know that if I set the CPU type to host I would get better performance, but if the VM migrated to another node in the cluster it would crash, because unfortunately the other nodes have different CPUs.
 
I don't know how to retrieve these logs
Just open the backup task in PVE's task log and click on download.
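If the download button is missing, the task logs can also be read on the PVE node itself; a minimal sketch assuming the standard PVE layout (the TASKDIR variable is only there to make the assumed path explicit):

```shell
# The task index lists one UPID per line; vzdump entries are backup runs.
TASKDIR="${TASKDIR:-/var/log/pve/tasks}"
grep vzdump "$TASKDIR/index" 2>/dev/null | tail -n 3
```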

I know that if I set the CPU type to host I would get better performance, but if the VM migrated to another node in the cluster it would crash, because unfortunately the other nodes have different CPUs.
Well, then you will be severely limited in performance... The recommended way would be to set up the PBS on a dedicated host with fast local storage, and not run it in a VM at all. Also, backing the datastore with an NFS share will reduce performance if the NFS server cannot provide the IOPS the datastore needs to operate.
 
Just open the backup task in PVE's task log and click on download.
I did it, but unfortunately, there are no logs of the backups made
[attached screenshot]

Well, then you will be severely limited in performance... The recommended way would be to set up the PBS on a dedicated host with fast local storage, and not run it in a VM at all. Also, backing the datastore with an NFS share will reduce performance if the NFS server cannot provide the IOPS the datastore needs to operate.
Currently, the only storage available is two Synology NAS units (those listed). One of them is connected to the cluster via iSCSI with two 10 GbE network cards in a multipath configuration, but I'm currently only using that connection for storing VMs and containers.
 
As stated, your main bottleneck is your TLS and AES encryption speed...
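A quick way to see whether the VM's virtual CPU exposes hardware AES at all (the generic default CPU types such as kvm64 typically hide it; CPU type host passes it through from the physical CPU):

```shell
# Check for the AES-NI CPU flag inside the guest; without it, TLS and
# AES work falls back to much slower software implementations.
if grep -qm1 '\baes\b' /proc/cpuinfo 2>/dev/null; then
    echo "AES-NI available"
else
    echo "AES-NI not available - TLS/AES will run in software"
fi
```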
 
As stated:
Please set the CPU type to host, as that should help to significantly increase the VMs performance.
You will have to either set the CPU type to host or, as recommended, set up a dedicated host for the PBS.
 
As stated:

You will have to either set the CPU type to host or, as recommended, set up a dedicated host for the PBS.
I have a second cluster with this node currently not in use; could I use it if I converted it into a PBS host?

[attached screenshot]

I need to add additional hard drives, though.
 
