pct restore wasting disk space

Jose Carrion

Sep 7, 2017
Hi there,
I found a problem with my backup strategy and I'm not sure what I did wrong.

I have two remote Proxmox nodes, each running 3 containers.
The running containers on the first node are backed up daily to the second node, and vice versa.
The idea is to keep a recently restored copy of each container on the other node, for some
redundancy.
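
(For context, the backup side is a nightly vzdump around 03:00 that ends up in /backups/dump on the other node; the entry below is only a simplified illustration, not the real job:)

# simplified sketch of the backup job on each node - the exact options are illustrative
0 3 * * * /usr/bin/vzdump 100 101 102 --dumpdir /backups/dump --mode snapshot --compress gzip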

Once each container backup has completed, cron restores the most recent dump (anything modified within the last day) with:

10 4 * * * find /backups/dump/*102* -mtime -1 -exec /usr/sbin/pct restore 102 '{}' --storage local-lvm -force -onboot 0 \;
30 4 * * * find /backups/dump/*101* -mtime -1 -exec /usr/sbin/pct restore 101 '{}' --storage local-lvm -force -onboot 0 \;
45 4 * * * find /backups/dump/*100* -mtime -1 -exec /usr/sbin/pct restore 100 '{}' --storage local-lvm -force -onboot 0 \;
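
vzdump also leaves a .log file next to each archive, so the *102* wildcard can match more than one file and run pct restore several times per night. A variant (just a sketch, not tested here) that only ever feeds the newest archive to the restore:

# sketch: restrict the match to .tar.gz archives and pick only the newest one
10 4 * * * /usr/sbin/pct restore 102 "$(ls -t /backups/dump/vzdump-lxc-102-*.tar.gz | head -n 1)" --storage local-lvm --force --onboot 0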

The backup is restored properly, but the restore task leaves the old logical volume of every container in place and creates
a new, numbered logical volume each time: vm-100-disk-1, vm-100-disk-2, vm-100-disk-3, etc. (wasting SSD space!)
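
To check which of those volumes are still referenced, the LVM view can be compared with what Proxmox itself reports (assuming the default pve volume group and the local-lvm storage):

lvs pve                      # all logical volumes in the pve volume group
pvesm list local-lvm         # volumes Proxmox knows about on that storage
cat /etc/pve/lxc/100.conf    # shows which disk the container actually uses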

To work around this, I first tried to delete the previously restored container by adding to crontab:

05 4 * * * pct destroy 102
25 4 * * * pct destroy 101
40 4 * * * pct destroy 100

but it did not work, so I then added:

06 4 * * * lvremove pve/vm-102-disk-1
26 4 * * * lvremove pve/vm-101-disk-1
41 4 * * * lvremove pve/vm-100-disk-1
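
For the record, two details probably matter for these two attempts (my assumption): cron's default PATH does not include /usr/sbin or /sbin, so pct and lvremove called without a full path are not even found, and lvremove prompts for confirmation unless -y is given. A corrected, though still risky, version would look like:

# sketch only: full paths for cron, stop the CT before destroying it, -y so lvremove does not prompt
05 4 * * * /usr/sbin/pct stop 102 ; /usr/sbin/pct destroy 102
06 4 * * * /sbin/lvremove -y pve/vm-102-disk-1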

The logs in /var/log/pve/tasks/... just show:

tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors
TASK ERROR: ERROR: archive contains no configuration file
----------------------------------------------------------------
Using default stripesize 64.00 KiB.
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Logical volume "vm-102-disk-4" created.
WARNING: Sum of all thin volume sizes (444.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (15.82 GiB)!
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: 4096/10485760 done
Creating filesystem with 10485760 4k blocks and 2621440 inodes
Filesystem UUID: 56bbe1e4-745a-4e66-a4aa-de06f41899c5
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624

Allocating group tables: 0/320 done
Writing inode tables: 0/320 done
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/320 done

extracting archive '/backups/dump/vzdump-lxc-102-2017_09_07-03_18_13.tar.gz'
Total bytes read: 3880540160 (3.7GiB, 123MiB/s)
Detected container architecture: amd64
TASK OK

Any ideas?

Thanks
 
Hi there,


It should not work like this - yes, a new LVM volume is created (the old one is not overwritten), but after a successful restore the old one is normally deleted. Verify this again by doing a manual restore (i.e. one not launched from crontab).

The other attempts are not recommended, since they interfere with the configuration structure of Proxmox VE and may damage something.
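
A minimal way to do that check from the shell, using the archive name from the task log above:

lvs pve | grep vm-102        # note which vm-102-disk-* volumes exist before
pct restore 102 /backups/dump/vzdump-lxc-102-2017_09_07-03_18_13.tar.gz --storage local-lvm --force
lvs pve | grep vm-102        # after a successful restore only one disk volume should remain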
 
Hi Richard.
In fact, I tested it manually through the web UI and it worked fine. If I run the same commands (used in crontab) in the shell, it still leaves the previous LVM volume behind without deleting it.
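
Maybe related: the task log shows a "This does not look like a tar archive" error before the successful run, so perhaps the wildcard also matches the vzdump .log file and the restore is executed more than once. Checking what find actually returns:

find /backups/dump/*102* -mtime -1    # list every file the cron pattern matches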

Thanks for replying.

Jose
 
