Restore Container fails

robbie

New Member
Nov 14, 2024
Hello all,

I have several containers running on PVE and I back them up with a PBS. There's one container which I cannot restore from that backup. When I try to restore, I get an error:

Code:
recovering backed-up configuration from 'XPR_Backup_02:backup/ct/182/2024-11-13T09:47:41Z'
TASK ERROR: unable to restore CT 100 - unable to parse directory volume name '9.31322574615479e-10'

I tried the restore from the web GUI. The other containers on that PVE restore without any error; only the one I mentioned doesn't work. The backup mode for all containers is suspend. If more information is needed, please let me know. Any idea what could be wrong?
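
For reference, I could also try the restore from the shell to see if it gives more detail; I think the command would look roughly like this (not sure I have the syntax exactly right):

Code:
pct restore 100 XPR_Backup_02:backup/ct/182/2024-11-13T09:47:41Z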

Thanks!
Thomas
 
Hello @robbie

and welcome to the Community :). The config of your container would be interesting here. You can display the config of CT 100 with the following command:

Code:
pct config 100
And your Proxmox VE version with "pveversion -v".
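It might also be worth looking at the config that is stored inside the backup archive itself; if I remember correctly it can be extracted with something like this (using the volume ID from your error message):

Code:
pvesm extractconfig XPR_Backup_02:backup/ct/182/2024-11-13T09:47:41Z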
 
Hello Mario,

thank you for the welcome. Here's the information you asked for:

Code:
pct config 100

arch: amd64
cores: 2
features: nesting=1
hostname: srv-Data-02-Test
memory: 12288
mp0: data:100/subvol-100-disk-0.subvol,mp=/vol/data,backup=1,size=1
net0: name=eth0,bridge=vmbr0,firewall=0,gw=10.101.66.254,hwaddr=56:BD:8F:64:D1:75,ip=10.101.66.82/24,type=veth
onboot: 1
ostype: debian
parent: Internal_DB_Backup
rootfs: pve:100/subvol-100-disk-0.subvol,size=0T
swap: 0
timezone: host
unprivileged: 1

Code:
pveversion -v

proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-11
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
pve-kernel-5.15.143-1-pve: 5.15.143-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2


Thanks!
Thomas
 
Hi,

I don't know how, but it's working and the extension is
Code:
.subvol
. And why does the extension seem weird? What should it look like? Any idea?
 
I don't know where the directory volume name '9.31322574615479e-10' comes from. I can't find anything that's related to that directory volume name.
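
The only pattern I can spot is that the value looks like 1 byte expressed in GiB, since 1 / 1024³ gives exactly that number (just a guess on my part, no idea if it's related to the size=1 on the mp0 line):

Code:
perl -e 'print 1/(1024**3)'
# should print 9.31322574615479e-10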
 
What does this mean? Who created this CT & how?

I didn't create the container. I only have to administer these containers (there are several of them), and that's why I don't know how this container was created.
I thought it wasn't working! Isn't that why you posted this thread?
No, I didn't say that the container isn't working; it's working fine. But for testing backup and restore I had to restore a backup of this container, and the restore fails with the error message I mentioned.
Normally it would have either no extension or possibly .raw
AFAIK if it is a ZFS subvolume - it will usually have subvol as the name prefix, but not as an extension.
It's a btrfs subvolume, and other containers on the PVE are configured the same way and work fine. :confused:
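
If it helps, the mount point lines of the other containers can be compared like this (assuming the configs are under /etc/pve/lxc/, which I think is the default location):

Code:
# list the rootfs/mpX lines of every container config on this node
grep -E '^(rootfs|mp[0-9]+):' /etc/pve/lxc/*.conf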