Restored a container backup, lost all my data on non-backup disk

Hi,
Just a thought, as I've recently discovered the shared=1 option for mount points. Would adding it to the mount line in the .conf file let Proxmox know that it is a shared mount and therefore not to delete it?

So, for example, in the OP's 103.conf file, add ",shared=1" to the end of the "mp0:" line?
No. The shared flag tells Proxmox VE that a specific volume is shared between nodes (but it won't automagically share the volume itself) and thus doesn't need to be migrated. It does not affect restore AFAIK.
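Just to illustrate the syntax being discussed (the host path and mount target below are made up, and as far as I can tell the flag only applies to bind/non-volume mount points), it would look like this in the container config:

Code:
# hypothetical mp0 bind mount line in /etc/pve/lxc/103.conf
# shared=1 only marks the path as already available on all nodes;
# it does not influence backup or restore
mp0: /mnt/pve/shared-data,mp=/mnt/data,shared=1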
 
Did this behavior change in recent versions?

I just created a test VM with 2 disks with the second disk excluded from backup. Immediately after backup I did a restore. The second disk was still there after restore, I just had to attach it.

Does this mean that when a restore is executed, disks that are not included in the backup will be kept instead of deleted?

I looked for this in the documentation but could not find anything.

PS: sorry to raise this thread from the dead
 
Does this mean that when a restore is executed, disks that are not included in the backup will be kept instead of deleted?
As far as I understand:
For VMs, correct: it will be kept.
For LXCs, it will be wiped.
 
Is this described anywhere in the documentation? For a test VM I can test and verify that vDisks are kept after a restore. However, for "production" I would very much like to be sure that those vDisks will be kept.
 
Hi,
Did this behavior change in recent versions?
No.

As far as I understand:
For VMs, correct: it will be kept.
For LXCs, it will be wiped.
Yes.

For VMs, only disks that are also part of the backup will be replaced (based on the disk key e.g. scsi0). Others are kept as unused.
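To illustrate (storage name, disk names and sizes here are invented), with a VM config along these lines:

Code:
# hypothetical VM config excerpt: scsi0 is included in backups, scsi1 is excluded
scsi0: local-zfs:vm-100-disk-0,size=32G
scsi1: local-zfs:vm-100-disk-1,backup=0,size=1T

After a restore, scsi0 is replaced from the archive, while vm-100-disk-1 stays on the storage and shows up as an unusedX entry that can simply be re-attached, matching what was observed above.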

For containers, the whole file system structure is backed up and restored as a whole. If you exclude a mountpoint volume from backup, it will be gone after restore. The UI has a warning about that.
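The container-side equivalent (again with invented names) would be:

Code:
# hypothetical container config excerpt: rootfs is backed up, mp0 is excluded
rootfs: local-zfs:subvol-200-disk-0,size=8G
mp0: local-zfs:subvol-200-disk-1,mp=/mnt/data,backup=0,size=1T

Here, subvol-200-disk-1 is gone after a restore, since for containers only what is inside the archive comes back.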
 
I introduce myself into the club of people who wiped their mountpoints during restore :)
But I want to add one thing.
I know for sure that I did multiple restores of this exact machine on Proxmox 7.x and the mountpoint wasn't wiped, not once, only the boot disk.
This is my first restore of the same container, but on Proxmox 8.2.7, and the mountpoint got wiped.
The warning seemed to be the same, so I didn't worry; the mountpoint had stayed untouched before, right?
Not this time.
Am I hallucinating, or was my config on Proxmox 7.x somehow special? (Now that I think of it, I may have been tinkering with the container's config file, so the mountpoint might have been configured in some special way.)
 
For containers, the whole file system structure is backed up and restored as a whole. If you exclude a mountpoint volume from backup, it will be gone after restore. The UI has a warning about that.
It's annoying that we get a warning about bind mount points as well. What's wrong with them? Why does Proxmox think the bind mount point will be deleted when restoring an LXC from backup?
 
I enlist myself into the fine club of the anonymous restore mountpoint wipers.

If I click the button "exclude from backup", I'd also expect it to be excluded from the restore wipe. Having a warning which is totally contradictory to what is expected... hmmm, so the warning is also wrong.

you can restore into a new VMID and then use the new 'reassign disk feature'
Restoring to a new container ID feels wrong. It would break the whole container ID naming logic and documentation.
I'd then have to track container IDs in the Backup Server, i.e. which ID belonged to which container and when, so I could pick the correct backup...
That would also break my prune logic in PBS.

Also, my LXC-ProxBackupServer is set up with that logic. :rolleyes:
That's unfortunate... so a restore is useless for me right now, as it will kill my backups... :eek:

I want the root disk of my containers captured in the backup.
The "data" disk (mount point) lives on dispersed Gluster / mdadm RAID1/RAID5 / ZFS mirror,
which makes the restore as well as the backup itself lightweight and fast.


I'll now have to manually create a ZFS dataset and manually add it as a mountpoint.
I always use lxc.mount.entry: /srv/host srv/container none bind,noatime 0 0, so Proxmox does not know about it.
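For context, that raw LXC option sits directly in the container's config file, roughly like this (the container ID and storage name are just examples):

Code:
# hypothetical /etc/pve/lxc/101.conf excerpt
rootfs: local-zfs:subvol-101-disk-0,size=8G
# bind mount handled by LXC itself; PVE does not manage it,
# so backup and restore never touch /srv/host
lxc.mount.entry: /srv/host srv/container none bind,noatime 0 0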


Any chance to have that "exclude mountpoint from restore" feature / advanced restore options available soon?
I already created a feature request to make it optional to wipe all disks of a guest on a restore: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
 
the message is rather clear IMHO - I am sorry you misunderstood it of course.
That message is not clear. Glad I read this before I used it.

I have a subvol (subvol-104-disk) on my ZFS storage pool. I want to kill LXC 104 but keep that subvol, as it's mounted in many of my LXCs as shared storage space.

Is there any way to do this?
 
Hi,
I have a subvol (subvol-104-disk) on my ZFS storage pool. I want to kill LXC 104 but keep that subvol, as it's mounted in many of my LXCs as shared storage space.

Is there any way to do this?
One way would be to manually remove the container's configuration file and the root filesystem volume. But you'd always need to be careful not to remove owned volumes if you re-use that ID in the future.

You could also mark the container as protected and rename it so that you know it's just a dummy for the volume.

But a cleaner way might be to rename the volume, so that it is not considered "owned" by a specific container, and use bind mounts to mount it in multiple containers. Of course, you'll then need to manage backing up the data yourself, since bind mounts are not backed up via container backup.
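A rough sketch of that approach, assuming a ZFS-backed subvol under rpool/data (dataset name, target container IDs and mount paths here are just examples):

Code:
# make sure CT 104 no longer references the volume, then rename the dataset so it
# no longer matches the subvol-<vmid>-disk-N scheme and is no longer "owned" by CT 104
zfs rename rpool/data/subvol-104-disk-0 rpool/data/shared-data

# bind-mount the host path into each container that needs it
# (adjust the path if 'zfs get mountpoint rpool/data/shared-data' shows something different)
pct set 105 -mp0 /rpool/data/shared-data,mp=/mnt/shared
pct set 106 -mp0 /rpool/data/shared-data,mp=/mnt/shared

As noted above, the renamed dataset then needs to be backed up separately, since bind mounts are not part of container backups.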
 
Spun up a test container with a test volume and tried mv to rename it via the Proxmox CLI:

mv: cannot move 'subvol-125-disk-1' to './renamed': Device or resource busy

I detached the volume from 125 beforehand, but it's still busy. How do I unmount it enough to rename it?

cp -R works, but I can't copy a 10 TB volume just to rename it :/
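For reference, the subvol shows up as a mounted ZFS dataset rather than a plain directory, which is why mv refuses:

Code:
# list the dataset and where it is mounted
zfs list -o name,mountpoint | grep subvol-125-disk-1

# show the active mount entry
findmnt | grep subvol-125-disk-1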
 
tried mv to rename it via the Proxmox CLI:
ZFS has its own semantics:

Code:
~# zfs create  rpool/data/subvol-9999-disk-0
~# touch /rpool/data/subvol-9999-disk-0/dummyfile

~# ls -al /rpool/data/subvol-9999-disk-0/
total 2
drwxr-xr-x 2 root root 3 Sep 19 21:06 .
drwxr-xr-x 4 root root 4 Sep 19 21:06 ..
-rw-r--r-- 1 root root 0 Sep 19 21:06 dummyfile

~# zfs rename rpool/data/subvol-9999-disk-0  rpool/data/renamed
~# ls -al /rpool/data/renamed/
total 2
drwxr-xr-x 2 root root 3 Sep 19 21:06 .
drwxr-xr-x 4 root root 4 Sep 19 21:06 ..
-rw-r--r-- 1 root root 0 Sep 19 21:06 dummyfile

~# zfs destroy rpool/data/renamed
 