Restored a container backup, lost all my data on non-backup disk

mlazzarotto

Member
Dec 30, 2021
I have an LXC container named 'fileserver', based on the Turnkey fileserver template.
It has a 4GB main disk, plus a second 100GB disk attached as a mount point (mp0).
I keep my movies on the bigger disk.
I set the 100GB disk to not be backed up, because, you know, I don't need to back up the movies, right?
I scheduled a daily backup for all my VMs.
Today I restored the backup (in the meantime I had been trying to enable an NFS server for Plex) and *poof*, the movies are gone.

current config:
[screenshot of the container configuration attached]
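The relevant part of /etc/pve/lxc/103.conf looks roughly like this (storage name and mount path are just examples, the real values are in the screenshot):

rootfs: local-lvm:vm-103-disk-0,size=4G
mp0: local-lvm:vm-103-disk-1,mp=/mnt/movies,backup=0,size=100G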

Now, losing the movies is not a big deal, but I want to understand what I did wrong so I can avoid this mistake in the future.
Is there an alternative to backing up 80GB of movies every night just to back up the LXC container config?
 
Right now a restore will first wipe the complete guest with all its disks and then create a new VM based on the data of the backups.
Well, now I understand. Since I'm only a novice Proxmox user, I would never have thought of this.

Next time I want to restore a backup of a VM with an attached disk that is not backed up, should I first detach the movies disk and only then restore the VM?
 
you can restore into a new VMID and then use the new 'reassign disk feature' (qm move-disk / pct move-volume with a target-vmid) to move the non-backed-up volume from the old to the new guest..
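for example, something along these lines (IDs and volume names are just placeholders - check 'pct help move-volume' / 'qm help move-disk' for the exact syntax on your PVE version):

# restore the backup into a new CT ID (113) while the old CT (103) still exists and owns the mp0 volume
pct restore 113 <backup-archive> --storage local-lvm
# re-assign the non-backed-up volume from the old CT to the new one
pct move-volume 103 mp0 --target-vmid 113 --target-volume mp0
# the equivalent for VM disks:
# qm move-disk <old-vmid> <disk> --target-vmid <new-vmid> --target-disk <disk>

once the volume has been re-assigned, removing the old guest no longer touches it.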
 
you can restore into a new VMID and then use the new 'reassign disk feature' (qm move-disk / pct move-volume with a target-vmid) to move the non-backed-up volume from the old to the new guest..
Oh yeah, this is a good idea indeed!
But will I lose my attached disk if I delete the original LXC/VM?
 
if you delete it while it's still owned by that guest, yes.
 
I'm doing scheduled daily backups for all my VMs & CTs; some data disks are not included in order to reduce the size.
Luckily I found this thread, otherwise I would have lost my data one day when restoring a backup!
I already created a feature request to make it optional to wipe all disks of a guest on a restore:
In my opinion PVE can definitely do better on this subject; having only the warning "This will permanently erase current VM data." is not clear enough.

you can restore into a new VMID
Could you point me to where I can find this option?
 
Could you point me to where I can find this option?
When you restore a backup from your backup storage (instead of clicking the restore button on the VM itself), PVE will ask you which storage to restore the guest to and which VMID you want to use.
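On the CLI the same thing would look roughly like this (the storage name and the snapshot timestamp are placeholders; 'pvesm list <storage>' shows the available backup volumes):

pct restore 401 pbs:backup/ct/400/<timestamp> --storage local-zfs

i.e. you simply pass a VMID that is not in use yet.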
 
OK... so, I found this post after searching for why my whole 20TB of backup data was gone after a restore of the LXC...
I have a TKL Fileserver as an LXC:
a 2GB root disk and 20TB as mp0.
The 20TB mp0 isn't backed up, because a backup of a backup of a backup is quite stupid.
After some fiddling around trying SMB as a share, I decided to stay with NFS.
To quickly undo my fiddling, I restored the LXC from the day before (using PBS).

But now I found out that ALL of the data on mp0 is gone :mad::mad::mad:

How can I restore this data?
Is it destroyed? Or just disabled?
Can I restore, or is 20TB of data gone?
 
if you restored over your existing container, then the volume is gone - which is why there is a warning that such an action will overwrite the existing container:

CT XXX - Restore. This will permanently erase current VM data.
Yes No
 
Yes, of the VM.
Not the attached mount.

The 2GB root disk is the disk to restore, so it's fine that those 2GB of data are gone.
But erasing the 20TB of data that is only mounted into the LXC shouldn't happen.

Is it possible to restore this data? (on ZFS)
How to restore a LXC in the future without losing all the data on the mountpoints?
 
Yes, of the VM.
Not the attached mount.

The 2GB root disk is the disk to restore, so it's fine that those 2GB of data are gone.
But erasing the 20TB of data that is only mounted into the LXC shouldn't happen.

the message is rather clear IMHO - I am sorry you misunderstood it of course.

Is it possible to restore this data? (on ZFS)
depends on what happened since then - you can attempt to poweroff the system, import it read-only using a previous TXG and see if the dataset is still there.
How to restore a LXC in the future without losing all the data on the mountpoints?
restore it into a new VMID if you don't want to overwrite the existing container (you can then re-assign volumes from the old container to the new one if they were not part of the backup). there are also some ideas to maybe support some form of 'partial restore' - but that gets confusing / complex quickly, which is something that should obviously be avoided when the question of 'is this volume overwritten or not' is concerned.
 
the message is rather clear IMHO - I am sorry you misunderstood it of course.
It would be nice if the message were clearer; I'm not the first one to run into this problem.

depends on what happened since then - you can attempt to poweroff the system, import it read-only using a previous TXG and see if the dataset is still there.
After I found out (I didn't write anything to it), I shut down the LXC.
Where can I find some good documentation about the process? (It is a RAIDZ2 setup with six 14TB disks.)

restore it into a new VMID if you don't want to overwrite the existing container (you can then re-assign volumes from the old container to the new one if they were not part of the backup). there are also some ideas to maybe support some form of 'partial restore' - but that gets confusing / complex quickly, which is something that should obviously be avoided when the question of 'is this volume overwritten or not' is concerned.
So, a restore of VMID 400 to VMID 401, and then changing the config to mount the mp0 of VMID 400, should do the trick?
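I.e., if I understand correctly, something like this (untested, storage name is just a placeholder, purely to check my understanding):

pct restore 401 <backup-archive> --storage rpool-data
pct move-volume 400 mp0 --target-vmid 401 --target-volume mp0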
 
something like this:
https://lists.freebsd.org/pipermail/freebsd-hackers/2013-July/043125.html

although I am not sure whether I'd want to risk it on a production pool without having full backups of all the data on it (or all the vdevs backing it).

there are also some proprietary/commercial tools that allegedly allow scanning a pool for deleted datasets/files for recovery.

using 'zdb -hhe POOL' (using a live cd that is ZFS capable) should give you some information about which TXG destroyed the dataset, so you can try to extract data using either zdb or a read-only import of the/a TXG before that..
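very roughly something like this (completely untested, pool name is a placeholder, and -T is a barely documented recovery option - only attempt this read-only and ideally from a live CD with the pool exported):

zdb -hh -e tank                                     # pool history: find the TXG where the dataset was destroyed
zpool import -o readonly=on -N -f -T <txg> tank     # read-only import at an earlier TXG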
 
Just stepped into that "clever" Proxmox feature myself. I mean the backup=0 option on storage mount points. The "message is rather clear". No comment.

What do you think, guys: would it make sense to create a separate LXC just for holding those mount points, and access them from the other LXCs (the ones that tend to get overwritten)?

What is the correct way to access the storage mount point of one LXC from another one?
 
Just a thought, as I've recently discovered the shared=1 option for mount points. Would adding it to your mount line in the .conf file let Proxmox know that it is a shared mount and therefore know not to delete it?

So, for example, in the OP's 103.conf file, add ",shared=1" to the end of the "mp0:" line?
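i.e. the line would end up looking something like this (volume name and mount path guessed, just to illustrate):

mp0: local-lvm:vm-103-disk-1,mp=/mnt/movies,backup=0,size=100G,shared=1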
 
