Failed container upgrade - not sure how to restore

neek

I have an LXC container which I wanted to upgrade from Ubuntu 22.04 to 24.04 last night. I took a backup, then ran the upgrade. All seemed to go well.

Now, the container boots, but never brings up a prompt when I attach to it, either through the web interface, or when I 'pct console <lxc-id>' to it. I'll debug that separately, but for now, I'd like to just restore it from the backup. When I choose to restore from the web gui, it asks which storage to use (I assume to write the new copy of the root image?) and then gives me the warning that "This will permanently erase current CT data. Mount point volumes are also erased."

In the lxc config file, I have the root on a storage called 'zfsdata', plus a mountpoint:
Code:
arch: amd64
cores: 2
hostname: nextcloud
memory: 2048
mp0: /tank/nextcloud_files,mp=/mnt/nextcloud_files
nameserver: 192.168.1.1
net0: name=eth0,bridge=vmbr0,hwaddr=1A:A0:FB:F1:A7:09,ip=dhcp,ip6=dhcp,tag=10,type=veth
onboot: 1
ostype: ubuntu
rootfs: zfsdata:subvol-104-disk-0,size=16G
swap: 2048

I do not want to erase / modify the data on the mountpoint at /tank/nextcloud_files. Can someone please confirm that the restore will only modify the rootfs?

Thank you
 
I have an LXC container which I wanted to upgrade from Ubuntu 22.04 to 24.04 last night. I took a backup, then ran the upgrade. All seemed to go well.

Now, the container boots, but never brings up a prompt when I attach to it, either through the web interface, or when I 'pct console <lxc-id>' to it.

Try it with nesting enabled: [1].

I cannot answer your actual restore question, sorry.

[1] https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_options
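For example, a minimal sketch of enabling it from the Proxmox host shell (VMID 104 is taken from the rootfs line in the config above; adjust for your container):
Code:
# enable the nesting feature for the container
pct set 104 --features nesting=1
# restart the container so the change takes effect
pct stop 104
pct start 104
The same feature can also be toggled in the web GUI under the container's Options -> Features.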
 
I'd recommend restoring the LXC to a new VMID and then reassigning the original LXC's mp0 to the newly restored VMID. I think that will keep your data safe.
(Screenshots: 1.Origional_LXC.png, 2.Restor_to_Diff-VMID.png, 3.Reassign_mp0_to_new_VMID.png)
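Roughly, the same approach from the CLI could look like this (VMID 105 is just an example of an unused ID and the backup archive path is a placeholder; the storage and mount point values come from the config earlier in the thread):
Code:
# restore the backup into a new, unused VMID (archive path is hypothetical)
pct restore 105 /var/lib/vz/dump/vzdump-lxc-104-<timestamp>.tar.zst --storage zfsdata
# detach the bind mount from the old container and attach it to the new one
pct set 104 --delete mp0
pct set 105 --mp0 /tank/nextcloud_files,mp=/mnt/nextcloud_files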
 
I'd recommend restoring the LXC to a new VMID and then reassigning the original LXC's mp0 to the newly restored VMID. I think that will keep your data safe.
Thank you, David. It looks like you restored a secondary disk, rather than the root. Do you then manually edit the mountpoints in the container's config file to reassign it as the root file system?
 
Hi neek:
In the example above, because it's our important Samba server and its mp0 occupies close to 6 TB, I use ZFS replication between two nodes to protect it rather than a traditional backup/restore task. ^_^
I only run the backup/restore task on the root disk to keep the accounts up to date.

Regards.
 
For anyone who stumbles onto this thread: I ended up backing up the container's conf file (in /etc/pve/lxc/XXX.conf), removing the mountpoint from it, and then restoring the root disk from backup. Worked great. But still, for some reason, I'm unable to do a software update of that container from Ubuntu 22.04 -> 24.04. I'll go research that separately.
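For reference, a rough CLI sketch of those steps (VMID 104 and the storage/mount point values come from the config above; the backup archive path is a placeholder):
Code:
# keep a copy of the container config before changing anything
cp /etc/pve/lxc/104.conf /root/104.conf.bak
# drop the bind mount entry so only the root disk is involved
pct set 104 --delete mp0
# restore the root disk over the existing container (archive path is hypothetical)
pct restore 104 /var/lib/vz/dump/vzdump-lxc-104-<timestamp>.tar.zst --storage zfsdata --force
# re-add the bind mount afterwards if the restored config no longer has it
pct set 104 --mp0 /tank/nextcloud_files,mp=/mnt/nextcloud_files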

david_tao, thank you again!
 
But still, for some reason, I'm unable to do a software update of that container from Ubuntu 22.04 -> 24.04. I'll go research that separately.
Did you enable nesting (and restart the container)? That's required for newer GNU/Linux distributions to work inside a container (and it's enabled by default for new containers on recent Proxmox versions). Maybe you did enable it, but I could not find it in your posts in this thread.
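As a quick way to check on the host (VMID 104 assumed from earlier in the thread), the features line in the container config shows whether nesting is set:
Code:
# look for a line like "features: nesting=1" in the output
pct config 104
If the line is missing, enabling it as in the earlier sketch and restarting the container should be enough.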