[SOLVED] Issue restoring a LXC

Dec 19, 2019
Hello,

I have a Proxmox 5.4-13 cluster with 2 nodes and 15 LXC containers, one of them running a mail server. I have no problems restoring the others, but the restore of the email server backup (75GB) has been going on for two days and is not over; the job has been running for 55 hours and still hasn't finished. Same problem on both nodes. Is this normal?

Using default stripesize 64.00 KiB.
Logical volume "vm-122-disk-0" created.
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: 4096/78643200 done
Creating filesystem with 78643200 4k blocks and 19660800 inodes
Filesystem UUID: cac6b7ef-78a1-4ad1-9f7d-9a00d2335ed6
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Allocating group tables: 0/2400 done
Writing inode tables: 0/2400 done
Creating journal (262144 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/2400 done

extracting archive '/mnt/pve/NAS01_Virtualizacion/dump/vzdump-lxc-114-2019_12_17-03_00_02.tar.gz'

By comparison, the Owncloud LXC's 90GB backup takes only 3 hours to restore.
 
hi,

can you post the config of the email server CT? pct config CTID

also, please post the output of pveversion -v
 
hi,

here it is:

arch: amd64
cores: 4
hostname: correo.clm-granada.com
memory: 4096
net0: name=eth0,bridge=vmbr2,firewall=1,gw=150.214.95.222,hwaddr=C2:39:4F:A9:FA:dE,ip=150.214.95.157/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-114-disk-0,size=300G
swap: 2048

proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-12
pve-kernel-4.13: 5.2-2
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-23-pve: 4.15.18-51
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.15.10-1-pve: 4.15.10-4
pve-kernel-4.15.3-1-pve: 4.15.3-1
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-3-pve: 4.13.16-50
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
firstly, i'd recommend upgrading your PVE.

my guess is that since you're backing up to a NAS, the slowness might come from the network transfer speed between the NAS and the PVE host
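
(as a rough check of the NAS read speed, independent of tar, you could time a plain sequential read of the archive from the NFS mount; the path below is simply taken from the restore log above:)
dd if=/mnt/pve/NAS01_Virtualizacion/dump/vzdump-lxc-114-2019_12_17-03_00_02.tar.gz of=/dev/null bs=1M status=progress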
 
does it print any errors at any point, or does it just hang?

can you mount the container manually via pct mount CTID?
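
(for reference, a minimal mount / inspect / unmount sequence, assuming CT 114 as in this thread:)
pct mount 114
ls /var/lib/lxc/114/rootfs
pct unmount 114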

is the owncloud container also on local-lvm?
 
is the owncloud container also on local-lvm? Yes.

mounted CT 114 in '/var/lib/lxc/114/rootfs'

Yesterday I made 2 backups: the tar.gz took more than 3 hours and is 75GB, the tar.lzo took 2.30 hours and is 105GB.

I'm trying to restore the tar.lzo backup. Network traffic was 30mb/s for about 1.14 hours, more or less the same rate at which the job was reading the backup (30mb/s DISK READ, monitored with iotop). Now there is nothing significant in iotop, but monitoring with top shows:

34142 root 20 0 26796 3256 2348 R 100,0 0,0 79:39.87 tar

tar is the process using the most CPU

There is nothing in the syslog about the restore job, and it is still running.
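
(one rough, generic way to confirm the tar process is still making progress, assuming PID 34142 from the top output above, is to read its I/O counters in /proc a couple of times and check that they keep growing:)
cat /proc/34142/io    # rchar/wchar should increase between runs while extraction is progressing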

Thank you
 
hi,

great.

but it's still weird why that container would take so long to back up.

is the VM you created on the same node and storage?

i'd like to try to reproduce the issue and fix it if it's a bug, but we're probably missing something
 
hi,

Same host, a Dell PowerEdge R740, and the same NAS.

What occurs to me is that having almost 600,000 small files (the emails with their attachments) produces a slowdown that makes the restore practically impossible. It is the only backup that gives me problems; I have 2 VMs and 15 LXCs on two hosts with the same network hardware and NAS.
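
(for what it's worth, a quick way to count the files would be to mount the CT as above and run, for example:)
find /var/lib/lxc/114/rootfs -type f | wc -l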

if you need a log file I can send it to you
 
if you need a log file I can send it to you
sure, just attach it here

btw, is it possible for you to upgrade the cluster? maybe it's a non-issue in a newer version
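
(if an upgrade is an option, recent 5.4 packages ship a pve5to6 checklist script that flags potential blockers before the 5-to-6 upgrade; a minimal sketch, assuming it is present on your pve-manager 5.4-13:)
pve5to6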
 

I have the same issue. The restore has now been running for 1 hour for a newly created LXC with an 8GB disk. The backup is located on a disk station. A 30GB VM took about 10 minutes for me.
 
That restore issue applies to unprivileged containers with docker installed (nesting and keyctl enabled).
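
(a quick way to check whether a given CT falls into that category is to look at its config, e.g. for CT 114:)
pct config 114 | grep -E 'unprivileged|features'
# "unprivileged: 1" together with "features: keyctl=1,nesting=1" would indicate this case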
 
