Restoring a CT backup deletes & recreates non-backed-up storage subvolume

nexos

Hi,
I tried to restore a CT from a local backup. The restoration of the CT itself worked fine, but my data storage volume (mp1) was deleted in the process. Additionally, the 1T volume was recreated on `local` (again with size 1T), but it was obviously empty, since my data storage is not part of the backup.
At the moment I'm using Proxmox 6.4-8; the restoration attempt happened earlier, around 2021-01-28, with a version that was recent at that time.

I would be very grateful if anyone has some advice. This issue is keeping me from updating my container, because if the update fails, restoring the backup will destroy my data :S
Feel free to ask if you need any additional logs or configuration information.

Thanks in advance!

CT Config
arch: amd64
cores: 10
hostname: nextcloud
memory: 8192
mp1: containers:subvol-101-disk-0,mp=/data,replicate=0,size=1T
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168xxx,hwaddr=xxx,ip=192.xxx,type=veth
onboot: 1
ostype: debian
rootfs: local:101/vm-101-disk-1.raw,size=8G
startup: order=2
swap: 8192
unprivileged: 1
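For context: vzdump only includes a non-root mount point in the backup when its backup flag is set, and the mp1 line above carries no backup flag, which is why the data volume is excluded. A hedged sketch of how it could be included, assuming `pct set` semantics as documented for PVE (adjust the CT ID and volume to your setup):

```shell
# Sketch only: re-declare mp1 with backup=1 so vzdump includes it.
# Note this makes backups much larger (the volume here is 1T).
pct set 101 -mp1 containers:subvol-101-disk-0,mp=/data,replicate=0,backup=1,size=1T
```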

storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,snippets,images,iso,rootdir,backup
    prune-backups keep-all=1

zfspool: containers
    pool tank
    content images,rootdir
    mountpoint /tank
    sparse 0

dir: backup
    path /tank/backup
    content backup
    prune-backups keep-last=2,keep-weekly=2

Restore Log

Formatting '/var/lib/vz/images/101/vm-101-disk-1.raw', fmt=raw size=8589934592
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: 4096/2097152 done
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 9d03284c-8ae6-4cef-b1fb-7345b261d40b
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: 0/64 done
Writing inode tables: 0/64 done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/64 done
Formatting '/var/lib/vz/images/101/vm-101-disk-2.raw', fmt=raw size=1099511627776
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: 4096/268435456 done
Creating filesystem with 268435456 4k blocks and 67108864 inodes
Filesystem UUID: 03191921-e03e-439b-be73-ab0887d7dfc7
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848
Allocating group tables: 0/8192 done
Writing inode tables: 0/8192 done
Creating journal (262144 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/8192 done
extracting archive '/tank/backup/dump/vzdump-lxc-101-2021_01_28-11_26_25.tar.zst'
tar: ./var/spool/postfix/deferred/4/4170743984: time stamp 2021-01-28 12:28:27 is 2243.615145626 s in the future
tar: ./var/spool/postfix/deferred/1/1E6A2256ED: time stamp 2021-01-28 12:28:27 is 2243.614934425 s in the future
Total bytes read: 2704209920 (2.6GiB, 177MiB/s)
Detected container architecture: amd64
TASK OK
 
That also happened here. As a workaround, using SMB/NFS shares works.

There really should be a warning that volumes excluded from the backup will be destroyed on restore. Ideally they would simply be ignored, so that the old volume could be kept.
 
Shit, there should definitely be a warning.
Are you running a Samba daemon in a separate container to serve the files, or did you install the Samba daemon on the Proxmox host itself?
Is there any other way to mount the data volume inside the container without proxmox noticing it?
 
Data is on a bare-metal TrueNAS installation, but using an SMB server inside a VM or LXC should work too, and both unprivileged and privileged LXCs should be fine as clients. The catch is that only privileged LXCs can mount SMB shares themselves (and you need to enable the "cifs/nfs" feature for the container first). You can still get a share into an unprivileged LXC indirectly by mounting the SMB share on the host and bind-mounting that mountpoint into the LXC (but then you will need to take care of all the user remapping).
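The indirect route for an unprivileged LXC could look roughly like this. Everything here is a sketch: the share path, credentials file, mountpoint and CT ID are placeholders, not values from this thread, and the uid/gid offset assumes PVE's default unprivileged mapping (container root = host uid 100000).

```shell
# On the Proxmox host: mount the SMB share, mapping ownership to the
# shifted uid/gid range so the unprivileged CT sees it as root-owned.
mount -t cifs //truenas/data /mnt/smb-data \
    -o credentials=/root/.smbcredentials,uid=100000,gid=100000

# Bind-mount the host mountpoint into the container (mp2 is arbitrary).
pct set 101 -mp2 /mnt/smb-data,mp=/data
```

Setting uid/gid at mount time sidesteps most of the manual id-remapping, since cifs presents all files with the given owner anyway.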
 
Thanks for the input and for pointing out the caveats. I think I'll use a minimal container for the smbd and mount it into my main container. I'll keep it as minimal as possible so that I don't need a backup of it, since I would run into the same problems when restoring the SMB container.
 
I just got caught out by this CT restore behaviour.
This thread seems to be the only mention of it that I can find. I may be mistaken, but I don't remember this happening before. Is it a change in PVE 7?
IMO it would be preferable if the restore process did not delete and recreate volumes that are specified in the CT config but excluded from the backup.
 
Any updates? How are you guys dealing with this? I have created a Samba LXC, but when I remove the CT and restore the backup, it deletes my subvolume. It is very risky to use it like this. But I like having direct access to the files...
 
I don't know if it's a good solution, but it prevents Proxmox from deleting the subvolume.
When you create a CT: detach the volume, rename it with zfs rename, remove the volume from the CT in the GUI, rename it back, then attach it from the console without the ":" storage prefix, i.e. as a plain ZFS path:
pct set 110 -mp0 rpool/subvol-110-disk-1,mp=/mnt/data,size=2000G
After this, the volume will not be deleted on remove or on restore.
Just one small thing: when you do a restore it will create a disk-2; detach and remove that one, then go to the console and pct set disk-1 again manually.
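The steps above could be sketched as the following command sequence. This is only a rough illustration under the same assumptions as the post (CT 110, pool rpool, mount point mp0); the GUI steps are shown as comments, and the rename target name is made up:

```shell
# 1. Temporarily hide the dataset from PVE so it isn't tracked as a volume.
zfs rename rpool/subvol-110-disk-1 rpool/keep-110-disk-1

# 2. In the GUI: detach mp0 and remove the now-orphaned volume from the CT.

# 3. Rename the dataset back to its original name.
zfs rename rpool/keep-110-disk-1 rpool/subvol-110-disk-1

# 4. Re-attach it as a plain path (no "storage:" prefix), so PVE treats it
#    as an unmanaged mount point and leaves it alone on remove/restore.
pct set 110 -mp0 rpool/subvol-110-disk-1,mp=/mnt/data,size=2000G
```

The key design point is step 4: a mount point given as a bare path instead of a `storage:volume` reference is not owned by any PVE storage, so the restore logic has nothing to delete and recreate.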
 
@ukro it's a working workaround!

Nevertheless, I do not understand why this is the default behaviour of Proxmox and why there isn't at least a warning. This got me into quite some trouble over the last few days. In the end, thanks to backups, I was able to resolve it, but I did not expect the restore to work this way!
 
