Restored a container backup, lost all my data on non-backup disk

Hi,
Just a thought, as I've recently discovered the shared=1 option for mount points. Would adding it to your mount line in the .conf file let Proxmox know that it is a shared mount and therefore not to delete it?

So, for example, in the OP's 103.conf file, add ",shared=1" to the end of the "mp0:" line?
No. The shared option tells Proxmox VE that a specific volume is shared between nodes (it won't automagically share the volume itself) and thus doesn't need to be migrated. It does not affect restore, AFAIK.
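For reference, the suggestion would amount to a 103.conf line roughly like this (storage name, volume and path are made up):

mp0: local-zfs:subvol-103-disk-1,mp=/data,backup=0,shared=1

But as said, the shared flag is meant for mount points on storage that every node can already reach, so it changes migration behaviour, not what happens on restore.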
 
Did this behavior change in recent versions?

I just created a test VM with 2 disks with the second disk excluded from backup. Immediately after backup I did a restore. The second disk was still there after restore, I just had to attach it.
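Roughly what I did, in case someone wants to reproduce it (VMID, storage and file names are made up):

# mark the second disk as excluded from backup
qm set 100 --scsi1 local-lvm:vm-100-disk-1,backup=0
# back it up, then restore over the same VMID
vzdump 100 --storage local --mode snapshot
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_12_06-10_00_00.vma.zst 100 --force
# the excluded disk is still there as an unused volume; re-attach it
qm set 100 --scsi1 local-lvm:vm-100-disk-1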

Does this mean that when a restore is executed, disks that are not included in the backup will be kept instead of deleted?

I searched the documentation for this but could not find anything.

PS: sorry to raise this thread from the dead
 
Does this mean that when a restore is executed, disks that are not included in the backup will be kept instead of deleted?
As far as I understand:
For VMs, correct, it will be kept.
For LXCs, it will be wiped.
 
Is this described anywhere in the documentation? For a test VM I can verify that vDisks are kept after a restore. However, for "production" I would very much like to be sure that those vDisks will be kept.
 
Hi,
Did this behavior change in recent versions?
No.

As far as I understand:
For VMs, correct, it will be kept.
For LXCs, it will be wiped.
Yes.

For VMs, only disks that are also part of the backup will be replaced (based on the disk key e.g. scsi0). Others are kept as unused.
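In config terms that looks roughly like this (a made-up 100.conf; the restored volume may get a new name):

# before the restore
scsi0: local-lvm:vm-100-disk-0,size=32G
scsi1: local-lvm:vm-100-disk-1,size=100G,backup=0

# after restoring a backup that only contains scsi0
scsi0: local-lvm:vm-100-disk-2,size=32G
unused0: local-lvm:vm-100-disk-1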

For containers, the whole file system structure is backed up and restored as a whole. If you exclude a mountpoint volume from backup, it will be gone after restore. The UI has a warning about that.
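To make that concrete with made-up names: given a 103.conf like

rootfs: local-zfs:subvol-103-disk-0,size=8G
mp0: local-zfs:subvol-103-disk-1,mp=/srv/data,backup=0,size=500G

restoring a backup over CT 103 rebuilds the whole container from the archive, and since the archive knows nothing about mp0, subvol-103-disk-1 is removed together with the old container.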
 
I hereby join the club of people who wiped their mountpoints during a restore :)
But I want to add one thing.
I know for sure that I did multiple restores of this exact machine on Proxmox 7.x and the mountpoint was never wiped, only the boot disk.
This was my first restore of the same container on Proxmox 8.2.7, and the mountpoint got wiped.
The warning seemed to be the same, so I didn't worry; the mountpoint had stayed untouched before, right?
Not this time.
Am I misremembering, or was my config on Proxmox 7.x somehow special? (Now that I think of it, I may have been tinkering with the container's config file, so the mountpoint might have been configured in some special way.)
 
For containers, the whole file system structure is backed up and restored as a whole. If you exclude a mountpoint volume from backup, it will be gone after restore. The UI has a warning about that.
It's annoying that we get this warning for bind mount points as well. What's wrong with them? Why does Proxmox think the bind mount point will be deleted while restoring an LXC from backup?
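For context, a bind mount point references a host directory directly instead of a PVE-managed volume, for example (path made up):

mp0: /mnt/pve/media,mp=/media

so, unlike a storage-backed mount point such as mp0: local-zfs:subvol-103-disk-1,mp=/media, there is no subvol/raw volume of its own behind it.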
 
I enlist myself in the fine club of anonymous restore mountpoint wipers.

If I click the "exclude from backup" button, I'd also expect it to be excluded from the restore wipe. Having a warning that is totally contradictory to what is expected... hmm, so the warning is also wrong.

You can restore into a new VMID and then use the new 'reassign disk' feature
Restoring to a new container ID feels wrong; it would break my whole container ID naming logic and documentation.
I'd then have to track container IDs in the Backup Server, i.e. which ID belonged to which container and when, so I could pick the correct backup...
That would also break my prune logic in PBS.

My LXC Proxmox Backup Server is also set up with that logic. :rolleyes:
That's unfortunate... so a restore is useless right now, as it would kill my backups... :eek:

I want the root disk of my containers captured in the backup.
The "data" disk (mount point) lives on dispersed Gluster / mdadm RAID1 or RAID5 / mirrored ZFS.
This keeps both the backup itself and the restore lightweight and fast.


For now I'll have to manually create a ZFS dataset and manually add it as a mountpoint.
I always use lxc.mount.entry: /srv/host srv/container none bind,noatime 0 0, so Proxmox does not know about it.
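A minimal sketch of that manual setup, assuming a pool called rpool and CT 103 (paths are made up):

# on the PVE host: create a dataset for the container's data
zfs create rpool/data/ct103-srv
# raw LXC bind mount added to /etc/pve/lxc/103.conf, invisible to Proxmox itself
lxc.mount.entry: /rpool/data/ct103-srv srv/data none bind,noatime 0 0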


Any chance of having that "exclude mountpoint from restore" feature / advanced restore options available soon?
I already created a feature request to make it optional to wipe all disks of a guest on a restore: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
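Until then, if I understand the quoted suggestion correctly, the container equivalent would look roughly like this (IDs, file names and storage are made up; pct move-volume with --target-vmid needs a reasonably recent PVE):

# restore the backup into a spare CT ID so nothing existing gets overwritten
pct restore 999 /var/lib/vz/dump/vzdump-lxc-103-2024_12_06-10_00_00.tar.zst --storage local-zfs
# then hand the untouched data volume of the old container over to the new one
pct move-volume 103 mp0 --target-vmid 999 --target-volume mp0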
 
