[SOLVED] LXC Backup size

cortes_

Hey,
I have the following problem. I have set up a daily backup of LXC containers to an external NetApp storage (connected via NFS).

There are 4 containers on the Proxmox host. With one of the containers I get a huge backup.
The container has one disk of 200 GB on local ZFS.
When I look at the backup logs, I see that the backup is around 440 GB, and every day this number grows by about 20 GB.
A backup of this container takes about 7 hours, which is of course unacceptable.

What steps can I take to find the cause?

Proxmox version - 6.4-6
Backup mode - snapshot, compression - zstd
zpool in RAIDZ1

Bash:
zfs list -o space rpool/data/subvol-105-disk-0
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool/data/subvol-105-disk-0  89.3G   111G        0B    111G             0B         0B
Bash:
zpool get all | grep -E 'ashift|comp|dedup'
rpool  dedupratio                     1.00x                          -
rpool  ashift                         12                             local
rpool  feature@lz4_compress           active                         local
rpool  feature@zstd_compress          disabled                       local

Backup logs
Bash:
vzdump-lxc-105-2021_06_06-00_07_06.log:2021-06-06 09:04:41 INFO: Total bytes written: 470392391680 (439GiB, 14MiB/s)
vzdump-lxc-105-2021_06_07-00_07_30.log:2021-06-07 12:53:33 INFO: Total bytes written: 495304099840 (462GiB, 11MiB/s)
vzdump-lxc-105-2021_06_08-00_07_34.log:2021-06-08 14:28:56 INFO: Total bytes written: 546285977600 (509GiB, 11MiB/s)

Bash:
cat /etc/pve/lxc/105.conf
arch: amd64
cores: 4
hostname: lxc.example.com
memory: 8192
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.0.1,hwaddr=42:A8:C3:8D:4F:80,ip=10.10.0.110/24,tag=40,type=veth
onboot: 1
ostype: ubuntu
parent: vzdump
protection: 1
rootfs: local-zfs:subvol-105-disk-0,size=200G
swap: 0
unprivileged: 1
 
Do you maybe have some files that are very compressible?

What does 'zfs get all rpool/data/subvol-105-disk-0' say?
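
(For reference, a quicker check than dumping every property is to query only the compression-related ones; this is just a sketch, with the dataset path taken from the container config above. A high compressratio together with a large logicalused would explain a backup that is much bigger than the on-disk size, since vzdump archives the uncompressed file contents.)

Bash:
# Show on-disk vs. logical size and the compression ratio for the container's dataset
zfs get used,logicalused,compressratio,refcompressratio rpool/data/subvol-105-disk-0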
 
Thanks for the reply.

Bash:
NAME                          PROPERTY              VALUE                          SOURCE
rpool/data/subvol-105-disk-0  type                  filesystem                     -
rpool/data/subvol-105-disk-0  creation              Wed May 26 13:22 2021          -
rpool/data/subvol-105-disk-0  used                  126G                           -
rpool/data/subvol-105-disk-0  available             73.6G                          -
rpool/data/subvol-105-disk-0  referenced            126G                           -
rpool/data/subvol-105-disk-0  compressratio         17.73x                         -
rpool/data/subvol-105-disk-0  mounted               yes                            -
rpool/data/subvol-105-disk-0  quota                 none                           default
rpool/data/subvol-105-disk-0  reservation           none                           default
rpool/data/subvol-105-disk-0  recordsize            128K                           default
rpool/data/subvol-105-disk-0  mountpoint            /rpool/data/subvol-105-disk-0  default
rpool/data/subvol-105-disk-0  sharenfs              off                            default
rpool/data/subvol-105-disk-0  checksum              on                             default
rpool/data/subvol-105-disk-0  compression           lz4                            inherited from rpool
rpool/data/subvol-105-disk-0  atime                 off                            inherited from rpool
rpool/data/subvol-105-disk-0  devices               on                             default
rpool/data/subvol-105-disk-0  exec                  on                             default
rpool/data/subvol-105-disk-0  setuid                on                             default
rpool/data/subvol-105-disk-0  readonly              off                            default
rpool/data/subvol-105-disk-0  zoned                 off                            default
rpool/data/subvol-105-disk-0  snapdir               hidden                         default
rpool/data/subvol-105-disk-0  aclmode               discard                        default
rpool/data/subvol-105-disk-0  aclinherit            restricted                     default
rpool/data/subvol-105-disk-0  createtxg             519367                         -
rpool/data/subvol-105-disk-0  canmount              on                             default
rpool/data/subvol-105-disk-0  xattr                 sa                             local
rpool/data/subvol-105-disk-0  copies                1                              default
rpool/data/subvol-105-disk-0  version               5                              -
rpool/data/subvol-105-disk-0  utf8only              off                            -
rpool/data/subvol-105-disk-0  normalization         none                           -
rpool/data/subvol-105-disk-0  casesensitivity       sensitive                      -
rpool/data/subvol-105-disk-0  vscan                 off                            default
rpool/data/subvol-105-disk-0  nbmand                off                            default
rpool/data/subvol-105-disk-0  sharesmb              off                            default
rpool/data/subvol-105-disk-0  refquota              200G                           local
rpool/data/subvol-105-disk-0  refreservation        none                           default
rpool/data/subvol-105-disk-0  guid                  14220453677571795555           -
rpool/data/subvol-105-disk-0  primarycache          all                            default
rpool/data/subvol-105-disk-0  secondarycache        all                            default
rpool/data/subvol-105-disk-0  usedbysnapshots       0B                             -
rpool/data/subvol-105-disk-0  usedbydataset         126G                           -
rpool/data/subvol-105-disk-0  usedbychildren        0B                             -
rpool/data/subvol-105-disk-0  usedbyrefreservation  0B                             -
rpool/data/subvol-105-disk-0  logbias               latency                        default
rpool/data/subvol-105-disk-0  objsetid              1284                           -
rpool/data/subvol-105-disk-0  dedup                 off                            default
rpool/data/subvol-105-disk-0  mlslabel              none                           default
rpool/data/subvol-105-disk-0  sync                  standard                       inherited from rpool
rpool/data/subvol-105-disk-0  dnodesize             legacy                         default
rpool/data/subvol-105-disk-0  refcompressratio      17.73x                         -
rpool/data/subvol-105-disk-0  written               126G                           -
rpool/data/subvol-105-disk-0  logicalused           1.13T                          -
rpool/data/subvol-105-disk-0  logicalreferenced     1.13T                          -
rpool/data/subvol-105-disk-0  volmode               default                        default
rpool/data/subvol-105-disk-0  filesystem_limit      none                           default
rpool/data/subvol-105-disk-0  snapshot_limit        none                           default
rpool/data/subvol-105-disk-0  filesystem_count      none                           default
rpool/data/subvol-105-disk-0  snapshot_count        none                           default
rpool/data/subvol-105-disk-0  snapdev               hidden                         default
rpool/data/subvol-105-disk-0  acltype               posix                          local
rpool/data/subvol-105-disk-0  context               none                           default
rpool/data/subvol-105-disk-0  fscontext             none                           default
rpool/data/subvol-105-disk-0  defcontext            none                           default
rpool/data/subvol-105-disk-0  rootcontext           none                           default
rpool/data/subvol-105-disk-0  relatime              off                            default
rpool/data/subvol-105-disk-0  redundant_metadata    all                            default
rpool/data/subvol-105-disk-0  overlay               on                             default
rpool/data/subvol-105-disk-0  encryption            off                            default
rpool/data/subvol-105-disk-0  keylocation           none                           default
rpool/data/subvol-105-disk-0  keyformat             none                           default
rpool/data/subvol-105-disk-0  pbkdf2iters           0                              default
rpool/data/subvol-105-disk-0  special_small_blocks  0                              default
 
The issue has been resolved. Inside the container I found millions of files taking up more than 200 GB in total.
They were software interrupt (softirq) metrics written by collectd.
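
(In case it helps anyone else, this is roughly how such runaway metric files can be tracked down. It is only a sketch: the mountpoint comes from the zfs output above, and /var/lib/collectd and /etc/collectd/collectd.conf are just the usual collectd paths on Debian/Ubuntu, so adjust them to your setup.)

Bash:
# Largest directories (logical size) inside the container's root filesystem
du -x --max-depth=3 /rpool/data/subvol-105-disk-0 | sort -n | tail -20

# Number of files under the suspected collectd data directory
find /rpool/data/subvol-105-disk-0/var/lib/collectd -type f | wc -l

# Which collectd plugins are enabled inside the container
grep -i '^LoadPlugin' /rpool/data/subvol-105-disk-0/etc/collectd/collectd.conf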

Thanks for the help.