Hi,
I have a container and made a backup using the Proxmox GUI. It has an mp0 configured just like the rootfs. I'm surprised by the behavior, and I don't like surprises in backups.
First, although the configuration for rootfs and mp0 looks "the same" in the GUI (in particular, neither mentions backup=0), only rootfs is backed up. Apparently backup=1 is the default for rootfs, while backup=0 is the default for mp0. I think this is wrong and should be changed: if both had the same default, the backup would have been either empty or complete, and the mistake would either have been noticed immediately or not have happened at all.
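For reference, this is how the difference shows up on the command line (just a sketch; VMID 100 and the storage name are from my setup, and I'm going by the backup mount point option described in man pct.conf): rootfs is always included, while an mp0 without an explicit backup=1 is silently skipped by vzdump.
Code:
# show the container configuration; mp0 carries no backup=1 flag,
# so vzdump excludes it, while rootfs is always included
pct config 100
# mp0: nas1:100/vm-100-disk-1.raw,mp=/mp0,mountoptions=noatime,size=100G
# rootfs: nas1:100/vm-100-disk-0.raw,size=24G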
But even worse: when restoring, the mp0 disk was still available and intact, since it lives on external storage (a NAS), yet the Proxmox restore function destroyed it. I think this is bad behavior. If a disk is not included in the backup, it should not be included in the restore, and it certainly should not be "restored" by erasing/wiping/reformatting (mkfs) an entire disk that was never backed up. In my opinion this is about the worst possible implementation; I think it is wrong and must be fixed.
Is there a workaround? "Remember before the restore and manually do..." does not work in my opinion; whoever performs the restore might not know about this, so it won't help.
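The only mitigation I can think of (a sketch, assuming the backup option from man pct.conf; adjust the VMID and volume spec to your setup) is to explicitly set backup=1 on every mount point, so the data actually ends up in the archive instead of being recreated empty on restore:
Code:
# explicitly include mp0 in backups; pct set replaces the whole option
# string, so the existing volume/mount settings must be repeated
pct set 100 -mp0 nas1:100/vm-100-disk-1.raw,mp=/mp0,mountoptions=noatime,size=100G,backup=1
# verify that the flag is now present
pct config 100 | grep ^mp0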
Or am I missing something? Do I misunderstand the situation?
I think my issue could be related to https://forum.proxmox.com/threads/lxc-mp0-data-loss-after-backup-restore.162861/, which mentions https://bugzilla.proxmox.com/show_bug.cgi?id=3783 from 2021.
Right now I feel quite bad that a seemingly simple bug that has been known for three years keeps destroying people's data. Shouldn't such things be fixed with the highest priority? A lot of problems can be recovered from, as long as the backups are reliable, so I think they must be reliable!
Steffen
PS:
I'm also making this a new post to spread the word that there is, in my opinion, a major issue in backup and restore, and hopefully to help someone who fell into the same trap and has not noticed yet.
So if you are using multiple mount points in containers, please keep in mind that they might be wiped on restore.
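To spot this in advance, it may help to check for mount points that do not carry backup=1 (a sketch, assuming VMID 100; adjust to your container):
Code:
# list mount points that do NOT carry backup=1 and would therefore be
# excluded from vzdump, yet still get reformatted by a restore
pct config 100 | grep -E '^mp[0-9]+:' | grep -v 'backup=1'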
Container configuration:
Code:
arch: amd64
cores: 4
features: nesting=1
hostname: repo1
memory: 8192
mp0: nas1:100/vm-100-disk-1.raw,mp=/mp0,mountoptions=noatime,size=100G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=...,ip....,type=veth
net1: name=eth1,bridge=vmbr0,firewall=1,hwaddr=....,ip=dhcp,tag=60,type=veth
onboot: 1
ostype: debian
rootfs: nas1:100/vm-100-disk-0.raw,size=24G
swap: 4096
unprivileged: 1
Backup task log:
Code:
2025-02-18 19:32:01 INFO: Starting Backup of VM 100 (lxc)
2025-02-18 19:32:01 INFO: status = running
2025-02-18 19:32:01 INFO: CT Name: repo1
2025-02-18 19:32:01 INFO: including mount point rootfs ('/') in backup
2025-02-18 19:32:01 INFO: excluding volume mount point mp0 ('/mp0') from backup (disabled)
2025-02-18 19:32:01 INFO: backup mode: snapshot
2025-02-18 19:32:01 INFO: ionice priority: 7
2025-02-18 19:32:01 INFO: create storage snapshot 'vzdump'
2025-02-18 19:32:04 INFO: creating vzdump archive '/mnt/pve/nas1/dump/vzdump-lxc-100-2025_02_18-19_32_01.tar.zst'
2025-02-18 19:32:21 INFO: Total bytes written: 2022993920 (1.9GiB, 112MiB/s)
2025-02-18 19:32:26 INFO: archive file size: 1.02GB
2025-02-18 19:32:26 INFO: adding notes to backup
2025-02-18 19:32:26 INFO: marking backup as protected
2025-02-18 19:32:26 INFO: cleanup temporary 'vzdump' snapshot
2025-02-18 19:32:26 INFO: Finished Backup of VM 100 (00:00:25)
Restore task log:
Code:
recovering backed-up configuration from 'nas1:backup/vzdump-lxc-100-2025_03_07-14_23_49.tar.zst'
Formatting '/mnt/pve/nas1/images/100/vm-100-disk-0.raw', fmt=raw size=25769803776 preallocation=off
Creating filesystem with 6291456 4k blocks and 1572864 inodes
Filesystem UUID: 3a502e0f-c9fd-4c24-8d6c-6ef3ceb450b6
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
Formatting '/mnt/pve/cnas1/images/100/vm-100-disk-1.raw', fmt=raw size=107374182400 preallocation=off
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: e339ef7e-7b36-45df-8752-a3a27dbafaf7
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
restoring 'nas1:backup/vzdump-lxc-100-2025_03_07-14_23_49.tar.zst' now..
extracting archive '/mnt/pve/nas1/dump/vzdump-lxc-100-2025_03_07-14_23_49.tar.zst'
Total bytes read: 2033356800 (1.9GiB, 100MiB/s)
merging backed-up and given configuration..
TASK OK