Issue with restoring an LXC container including a mounted additional disk

Tomahawk

Dear All,

I am having issues with restoring an LXC container that includes a mounted additional disk. To be clear, the container restored properly, it started and is accessible, but I can't see any data in the mount point where the additional disk was added.

In my restored LXC I can see the disk as it was before restore:

root@plexlxc:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/nvme-vm--103--disk--1 7.8G 3.5G 4.0G 47% /
/dev/mapper/nvme-vm--103--disk--2 3.9T 32K 3.7T 1% /mnt/media
none 492K 4.0K 488K 1% /dev
udev 32G 0 32G 0% /dev/dri
tmpfs 32G 4.0K 32G 1% /dev/shm
tmpfs 13G 1.6M 13G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock

but I can't see any data in this mount point. I had a lot of stuff there...

root@plexlxc:~# ll /mnt/media/
total 24
drwxr-xr-x 3 root root 4096 Feb 14 12:19 .
drwxr-xr-x 5 root root 4096 Feb 14 12:19 ..
drwx------ 2 root root 16384 Feb 14 12:19 lost+found

The disks are attached to the Proxmox server directly via Thunderbolt 4 (the second one is used for the LXC):

[screenshot: attached disks]

What am I doing wrong? How can I recover my data?
 

Hi,
were the mount points part of the backup? Please share the container configuration (pct config <VMID>). If the mount points were not part of the backup, you probably accepted overwriting the data at the prompt before the restore. In that case you will have to restore the data from a different backup source.

For reference, see also https://bugzilla.proxmox.com/show_bug.cgi?id=3783
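If you still have host access, you can check this yourself: a mount point is only included in backups if its config entry carries the backup=1 flag. A minimal sketch, assuming container ID 103:

pct config 103 | grep -E '^(rootfs|mp[0-9])'
# a mount point included in backups ends with ",backup=1", e.g.:
# mp0: nvme:vm-103-disk-2,mp=/mnt/media,size=3970G,backup=1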
 
Hi Chris,

Thank you for the quick response. Below you can find the configuration:

root@pve:~# pct config 103
arch: amd64
cores: 2
description: <div align='center'><a href='https://helper-scripts.com'><img src='https://raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A # Plex LXC%0A%0A <a href='https://ko-fi.com/D1D7EP4GF'><img src='https://img.shields.io/badge/&#x2615;-Buy me a coffee-blue' /></a>%0A </div>%0A
features: nesting=1
hostname: plexlxc
memory: 2048
mp0: nvme:vm-103-disk-1,mp=/mnt/media,size=3970G
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=E2:46:A0:CF:91:03,ip=192.168.1.228/24,mtu=1500,type=veth
onboot: 1
ostype: debian
rootfs: nvme:vm-103-disk-1,size=8G
searchdomain:
swap: 512
tags: proxmox-helper-scripts
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
 
mp0: nvme:vm-103-disk-2,mp=/mnt/media,size=3970G
Yes, it looks like this mount point is not included in the backup; there is no backup=1 flag set for it. I am afraid you wiped your data and will have to restore it from a backup of that mount point's data.
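To include it in future backups, the flag can be set by re-specifying the mount point with backup=1 appended (a sketch using the values quoted above; adjust volume and size to match your container):

pct set 103 -mp0 nvme:vm-103-disk-2,mp=/mnt/media,size=3970G,backup=1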
 
Thank you for the explanation. So to avoid this in the future, do I need to also include the additional disk in the backup? Or is there another possibility, to not back up the second huge disk and not wipe it during restore? I am using PVE 7.1-7.
 
Best is to include the disk (which will of course increase backup times and storage requirements). Alternatively, you could restore the container to a different VM ID, detach the mount point, reassign the unused disk to the newly restored container, and reattach it as a mount point.
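On the command line, that alternative could look roughly like this (a sketch only: the archive name and the new ID 104 are assumptions, and volume reassignment requires a recent enough Proxmox VE version):

# restore the backup to a new container ID; only backed-up volumes are recreated
pct restore 104 /var/lib/vz/dump/vzdump-lxc-103-<timestamp>.tar.zst --storage nvme
# detach the data mount point from the old container; its volume becomes unused0
pct set 103 --delete mp0
# reassign the unused volume to the new container as its mp0
pct move-volume 103 unused0 --target-vmid 104 --target-volume mp0
# verify the mount point configuration afterwards
pct config 104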

I am using PVE 7.1-7

Although, without looking it up I am not sure if that functionality is already present in the Proxmox VE version you are using (another reason to upgrade).

Please upgrade to at least Proxmox VE 7.4, even better to the latest version 8.1... Your version is end of life and does not receive any security updates anymore.

 
Dear All,

I am facing another issue. Namely, I have the same container as mentioned above, with an additional disk attached:
mp0: nvme:vm-103-disk-2,mp=/mnt/media,size=3970G

and with this additional disk I am getting issues with low PVE space. How can the attached disk have an impact on the PVE free space? How can I change it?


[screenshots: storage usage]


root@pve:~# pct config 103
arch: amd64
cores: 2
description: <div align='center'><a href='https://helper-scripts.com'><img src='https://raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A # Plex LXC%0A%0A <a href='https://ko-fi.com/D1D7EP4GF'><img src='https://img.shields.io/badge/&#x2615;-Buy me a coffee-blue' /></a>%0A </div>%0A
features: nesting=1
hostname: plexlxc
memory: 2048
mp0: nvme:vm-103-disk-1,mp=/mnt/media,size=3970G
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=E2:46:A0:CF:91:03,ip=192.168.1.228/24,mtu=1500,type=veth
onboot: 1
ostype: debian
rootfs: nvme:vm-103-disk-0,size=8G
searchdomain:
swap: 512
tags: proxmox-helper-scripts
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file


-------------------------------

root@plexlxc:~# df -h
df: /mnt/truenas: Host is down
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/nvme-vm--103--disk--0 7.8G 3.7G 3.8G 49% /
/dev/mapper/nvme-vm--103--disk--1 3.9T 1.4T 2.3T 38% /mnt/media
none 492K 4.0K 488K 1% /dev
udev 32G 0 32G 0% /dev/dri
tmpfs 32G 4.0K 32G 1% /dev/shm
tmpfs 13G 1.7M 13G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
 

Thank you for the explanation. So to avoid this in the future, do I need to also include the additional disk in the backup? Or is there another possibility, to not back up the second huge disk and not wipe it during restore? I am using PVE 7.1-7.


If you do not want to back up a huge disk but still want to reuse the data, then you should restore the LXC with a different LXC ID. This would only restore the rootfs and the other small disks, if you have them. After that you could transfer ownership of the huge disk to the newly restored LXC.
 
But when I want to attach the disk to the LXC restored under a different ID, it asks me to assign a disk size, and even if I allocate the full disk, it is empty, without the data I had previously.

[screenshot: mount point creation dialog]
 

But when I want to attach the disk to the LXC restored under a different ID, it asks me to assign a disk size, and even if I allocate the full disk, it is empty, without the data I had previously.


Like Chris said, you did not back up your huge disk. And when you restored LXC 103, the LXC was present, so it should have given you a warning saying it will overwrite your data. If you accepted that, then your huge disk got overwritten. So I hope you have a backup of that data somewhere...
What I was trying to say in my last post was this:
Assuming LXC 103 still exists (before you did the restore) and the data is still there, but you messed up your LXC because, say, you installed something and it did not work afterwards. Then you should have restored your LXC to another LXC ID, let us say 104. After that restore, you could go to 104 and check if there is a mount point (assuming not, because no backup of it exists). Then you could go to 103, select the huge disk, and transfer ownership to 104, like this:

[screenshot: reassign owner dialog]

What you are trying to do up there is create a new disk, which of course would be empty.
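For reference, the CLI counterpart of that reassign dialog should be something along these lines (again assuming IDs 103 and 104, and a Proxmox VE version that supports volume reassignment):

pct move-volume 103 mp0 --target-vmid 104 --target-volume mp0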
 
Thank you for your help! Unfortunately, this reassign volume option is not available in my version of Proxmox, 7.1...

 
