Backup Issue

RobertusIT

Hi

I have an issue when I make backups of my VMs and LXC containers.

From the backup log:
Code:
INFO: starting new backup job: vzdump 100 101 102 103 104 105 106 107 110 127 200 --mode snapshot --mailnotification always --storage USB500GB --prune-backups 'keep-last=1,keep-weekly=1' --compress zstd --quiet 1 --notes-template '{{cluster}}, {{guestname}}, {{node}}, {{vmid}}'
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2023-03-16 03:00:01
INFO: status = running
INFO: VM Name: HOME-ASSISTANT
INFO: include disk 'sata0' 'local-lvm:vm-100-disk-1' 32G
INFO: include disk 'efidisk0' 'local-lvm:vm-100-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/USB500GB/dump/vzdump-qemu-100-2023_03_16-03_00_01.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'f5c0eca3-b6a2-495a-aad6-d38692039271'
INFO: resuming VM again
INFO:   1% (576.4 MiB of 32.0 GiB) in 3s, read: 192.1 MiB/s, write: 171.7 MiB/s
INFO:   5% (1.7 GiB of 32.0 GiB) in 6s, read: 376.2 MiB/s, write: 159.8 MiB/s
INFO:   7% (2.3 GiB of 32.0 GiB) in 9s, read: 218.3 MiB/s, write: 165.9 MiB/s
INFO:   9% (3.0 GiB of 32.0 GiB) in 12s, read: 253.8 MiB/s, write: 157.8 MiB/s
INFO:  11% (3.7 GiB of 32.0 GiB) in 15s, read: 223.5 MiB/s, write: 184.4 MiB/s
INFO:  14% (4.5 GiB of 32.0 GiB) in 18s, read: 280.3 MiB/s, write: 167.5 MiB/s
INFO:  15% (5.1 GiB of 32.0 GiB) in 21s, read: 189.7 MiB/s, write: 114.9 MiB/s
INFO:  20% (6.6 GiB of 32.0 GiB) in 24s, read: 502.7 MiB/s, write: 135.5 MiB/s
INFO:  22% (7.1 GiB of 32.0 GiB) in 27s, read: 186.1 MiB/s, write: 145.2 MiB/s
INFO:  23% (7.6 GiB of 32.0 GiB) in 30s, read: 177.6 MiB/s, write: 168.9 MiB/s
INFO:  26% (8.6 GiB of 32.0 GiB) in 33s, read: 332.3 MiB/s, write: 170.1 MiB/s
INFO:  28% (9.3 GiB of 32.0 GiB) in 36s, read: 234.4 MiB/s, write: 165.4 MiB/s
INFO:  34% (11.0 GiB of 32.0 GiB) in 39s, read: 601.8 MiB/s, write: 153.4 MiB/s
INFO:  37% (12.2 GiB of 32.0 GiB) in 42s, read: 379.4 MiB/s, write: 147.4 MiB/s
INFO:  44% (14.2 GiB of 32.0 GiB) in 45s, read: 698.4 MiB/s, write: 143.6 MiB/s
INFO:  46% (15.0 GiB of 32.0 GiB) in 48s, read: 261.0 MiB/s, write: 166.5 MiB/s
INFO:  49% (15.9 GiB of 32.0 GiB) in 51s, read: 321.4 MiB/s, write: 176.4 MiB/s
INFO:  51% (16.4 GiB of 32.0 GiB) in 54s, read: 152.2 MiB/s, write: 118.7 MiB/s
INFO:  52% (16.9 GiB of 32.0 GiB) in 57s, read: 170.4 MiB/s, write: 125.6 MiB/s
INFO:  57% (18.3 GiB of 32.0 GiB) in 1m, read: 503.0 MiB/s, write: 164.1 MiB/s
INFO:  60% (19.2 GiB of 32.0 GiB) in 1m 3s, read: 312.6 MiB/s, write: 189.2 MiB/s
INFO:  64% (20.6 GiB of 32.0 GiB) in 1m 6s, read: 455.7 MiB/s, write: 167.0 MiB/s
INFO:  66% (21.3 GiB of 32.0 GiB) in 1m 9s, read: 252.4 MiB/s, write: 162.4 MiB/s
INFO:  67% (21.7 GiB of 32.0 GiB) in 1m 12s, read: 139.0 MiB/s, write: 111.7 MiB/s
INFO:  69% (22.3 GiB of 32.0 GiB) in 1m 15s, read: 206.6 MiB/s, write: 172.6 MiB/s
INFO:  72% (23.0 GiB of 32.0 GiB) in 1m 18s, read: 244.1 MiB/s, write: 172.9 MiB/s
INFO:  74% (23.7 GiB of 32.0 GiB) in 1m 21s, read: 238.0 MiB/s, write: 163.7 MiB/s
INFO:  79% (25.3 GiB of 32.0 GiB) in 1m 24s, read: 546.2 MiB/s, write: 150.1 MiB/s
INFO:  86% (27.8 GiB of 32.0 GiB) in 1m 27s, read: 845.8 MiB/s, write: 107.0 MiB/s
INFO:  89% (28.6 GiB of 32.0 GiB) in 1m 30s, read: 264.6 MiB/s, write: 147.9 MiB/s
INFO:  91% (29.4 GiB of 32.0 GiB) in 1m 33s, read: 268.2 MiB/s, write: 127.7 MiB/s
INFO:  94% (30.4 GiB of 32.0 GiB) in 1m 36s, read: 339.5 MiB/s, write: 121.7 MiB/s
INFO:  97% (31.1 GiB of 32.0 GiB) in 1m 39s, read: 232.0 MiB/s, write: 169.3 MiB/s
INFO: 100% (32.0 GiB of 32.0 GiB) in 1m 42s, read: 323.2 MiB/s, write: 108.6 MiB/s
INFO: backup is sparse: 16.84 GiB (52%) total zero data
INFO: transferred 32.00 GiB in 102 seconds (321.3 MiB/s)
INFO: archive file size: 7.12GB
INFO: adding notes to backup
INFO: prune older backups with retention: keep-last=1, keep-weekly=1
INFO: removing backup 'USB500GB:backup/vzdump-qemu-100-2023_03_15-03_00_04.vma.zst'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 100 (00:01:54)
INFO: Backup finished at 2023-03-16 03:01:55
INFO: Starting Backup of VM 101 (lxc)
INFO: Backup started at 2023-03-16 03:01:55
INFO: status = running
INFO: CT Name: FRIGATE
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "snap_vm-101-disk-0_vzdump" created.
  WARNING: Sum of all thin volume sizes (<194.02 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (<16.00 GiB).
INFO: creating vzdump archive '/mnt/USB500GB/dump/vzdump-lxc-101-2023_03_16-03_01_55.tar.zst'
INFO: Total bytes written: 4831795200 (4.5GiB, 82MiB/s)
INFO: archive file size: 1.62GB
INFO: adding notes to backup
INFO: prune older backups with retention: keep-last=1, keep-weekly=1
INFO: removing backup 'USB500GB:backup/vzdump-lxc-101-2023_03_15-03_01_58.tar.zst'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: cleanup temporary 'vzdump' snapshot
  Logical volume "snap_vm-101-disk-0_vzdump" successfully removed
INFO: Finished Backup of VM 101 (00:00:59)
INFO: Backup finished at 2023-03-16 03:02:54
INFO: Starting Backup of VM 102 (lxc)
INFO: Backup started at 2023-03-16 03:02:54
INFO: status = running
INFO: CT Name: PLEX
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'

[Attachment: 1678958847384.png]

The disks are mounted and working:

Code:
root@NUC-i3:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.8G     0  7.8G   0% /dev
tmpfs                 1.6G  1.6M  1.6G   1% /run
/dev/mapper/pve-root   25G  6.6G   17G  29% /
tmpfs                 7.8G   55M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sdc1             916G  152G  718G  18% /mnt/USB1TB
/dev/sdd1             458G  218G  217G  51% /mnt/USB500GB
/dev/sdb1             115G   51G   59G  47% /mnt/SSD128GB
/dev/sda2             511M  328K  511M   1% /boot/efi
/dev/fuse             128M   24K  128M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0

How can I fix it?

It seems to get stuck when making a backup of my Plex LXC, which is a privileged CT, but this issue is random...
 
Hi,
seems like your node becomes unresponsive during backup.
Do you see any errors in the journal while running the backup?
Please post the output from around the time when the issue happens: journalctl --since <date> --until <date>.
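For example, matching the backup window from the log above (the timestamps are only placeholders, adjust them to when the hang actually occurred):
Code:
journalctl --since "2023-03-16 03:00:00" --until "2023-03-16 03:30:00"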
 

[Attachments: journal output]

Okay,
so you have a lot of NFS timeout error messages in your logs. Is the NFS share related to the Plex LXC container, maybe mounted inside of it?

Please also post the output of pct config 102 and cat /etc/pve/storage.cfg, as well as pveversion -v.
 
Yes, inside the LXC container I have an NFS mount in /etc/fstab pointing to 192.168.178.111.

Code:
root@NUC-i3:~# pct config 102
arch: amd64
cores: 4
features: fuse=1,mount=nfs;cifs,nesting=1
hostname: PLEX
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=F2:DE:D0:5C:A9:72,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-102-disk-0,size=96G
swap: 1024
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.cgroup2.devices.allow: c 212:* rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/102/mount_hook.sh
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir 0, 0


root@NUC-i3:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: SSD128GB
        path /mnt/SSD128GB
        content rootdir,iso,images,snippets,vztmpl,backup
        shared 0

dir: USB1TB
        path /mnt/USB1TB
        content backup,snippets,images,iso,rootdir,vztmpl
        shared 0

dir: USB500GB
        path /mnt/USB500GB
        content vztmpl,iso,rootdir,snippets,images,backup
        prune-backups keep-last=1
        shared 0



root@NUC-i3:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-6
pve-kernel-5.15: 7.3-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-6
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-3
pve-qemu-kvm: 7.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 
I would recommend fixing the NFS connection problem first, as it seems to be related to your backup issue. Have you checked whether the backup goes through without issues if you disconnect your NFS share? Also, it would make sense to manage the NFS storage on the PVE side and hand it to the container as a bind mount, instead of mounting it inside the container itself.
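A minimal sketch of what that could look like, assuming the NFS export on 192.168.178.111 is /export/media (the export path and the mount points are only examples, adjust them to your setup):
Code:
# on the PVE host: mount the NFS export (for a persistent setup, add it to the host's /etc/fstab or define it as PVE storage)
mkdir -p /mnt/nfs-media
mount -t nfs 192.168.178.111:/export/media /mnt/nfs-media

# hand the directory to CT 102 as a bind mount; bind mount points are not included in vzdump backups
pct set 102 -mp0 /mnt/nfs-media,mp=/mnt/media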
 
Backups also work when run manually.

This issue is random and happens when the backup job starts.
Anyway, I'll now try to mount the NFS share into the LXC with a bind mount. I don't know how to do that yet; I need to understand it first.
 
I can't use a mount point in the LXC, because I use a FUSE filesystem mount, and doing that on the host isn't safe.
Please note that the use of fuse mounts inside the LXC is problematic for backup modes other than stopped, see https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pct_container_storage

The recommendation is to mount it on the host.
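If one of the containers does turn out to use a fuse mountpoint, a one-off stop-mode backup can be used to check whether the hang disappears; a sketch using the storage name from the job above (the vmid and options are just examples):
Code:
vzdump 102 --mode stop --storage USB500GB --compress zstd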

I don't back up the VM where the NFS server is, but it doesn't change anything.
You mean that you also encounter the issue on containers without an NFS share mounted? Do these use a fuse mountpoint by any chance?

Were you able to fix the NFS connection issues? Please provide up-to-date journal output so we are not chasing ghosts from the past.
 

Maybe I'm being a little bit confusing, so this is my setup:

An LXC with an NFS client mount, not FUSE.

A virtual machine with Ubuntu and an NFS server, with a FUSE folder mounted.

Simple.

I don't back up this Ubuntu virtual machine, but when the automatic backup of the LXC runs, I have this issue; it is random.
 
Okay, so now I understand your setup a bit better. Is the NFS share mounted in the LXC the one provided by the VM or is this another one?

Regarding your issue: As already stated before, the issue seems to be that sometimes the NFS share is not responding during the backup of the LXC. Without current backup and system log files it is hard to tell what's going on. Therefore, the suggestion remains to mount the share on the host and use a bind mount instead, which is not included in the backup, in order to exclude or confirm an NFS-mountpoint-related issue.
 
In the LXC CT, the NFS share is provided by the VM.

Mounting a FUSE filesystem on the host is a problem, because it uses a lot of ramdisk, and I prefer to keep the host as clean as possible, but OK, I'll try.
If other logs can help, I can give them to you.
 
