[SOLVED] Cannot do restore from one node to another in a cluster.

sherbmeister

Member
So I've got three nodes. When I try to restore a container from one node to another, I get this error:

recovering backed-up configuration from 'local:backup/vzdump-lxc-556-2023_11_29-18_01_03.tar.zst'
mounting container failed
TASK ERROR: unable to restore CT 305 - cannot open directory //rpool/ROOT/pve-1/subvol-305-disk-0: No such file or directory

Anything I can do to make this work? Thanks
 
//rpool/ROOT/pve-1/subvol-305-disk-0
That path looks very wrong. First the "//", and second, the LXC's dataset is stored on top of your root dataset ("pve-1").

What's the output of pvesm status and cat /etc/pve/storage.cfg on the node you are trying to restore to?
 
Code:
root@newton:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

dir: storage2
        path /mnt/pve/storage2
        content backup,snippets,vztmpl,iso,rootdir,images
        is_mountpoint 1
        nodes yautja
        shared 0

dir: NVME
        path /mnt/pve/NVME
        content backup,snippets,vztmpl,iso,rootdir,images
        is_mountpoint 1
        nodes yautja
        shared 0

dir: storage3
        path /mnt/pve/storage3
        content vztmpl,iso,images,rootdir,backup,snippets
        is_mountpoint 1
        nodes yautja
        shared 0

lvm: auriga-storage
        vgname auriga-storage
        content images,rootdir
        nodes auriga
        shared 0

zfspool: local-zfs
        pool rpool/ROOT/pve-1
        content rootdir,images
        nodes auriga
        sparse 0

zfspool: local-zfs-newt
        pool rpool/ROOT/pve-1
        content images,rootdir
        nodes newton
        sparse 0

lvm: local-lvm-yautja
        vgname pve
        content images,rootdir
        nodes yautja
        shared 0

cifs: ftp-yautja
        path /mnt/pve/ftp-yautja
        server 192.168.69.239
        share NAS
        content backup,vztmpl
        prune-backups keep-all=1
        username marius

cifs: ftp-newton
        path /mnt/pve/ftp-newton
        server 192.168.69.7
        share server-backups
        content images,rootdir,vztmpl,snippets,backup
        prune-backups keep-all=1
        username marius

cifs: ISO
        path /mnt/pve/ISO
        server 192.168.69.7
        share isos
        content iso
        prune-backups keep-all=1
        username marius
 

Attachment: SS1.png (screenshot, 22.5 KB)
zfspool: local-zfs
        pool rpool/ROOT/pve-1
        content rootdir,images
        nodes auriga
        sparse 0

zfspool: local-zfs-newt
        pool rpool/ROOT/pve-1
        content images,rootdir
        nodes newton
        sparse 0
"local-zfs" usually points to rpool/data and not to rpool/ROOT/pve-1 (which is mounted to "/" as its the root filesystem).
 
you cannot restore anything the node doesn't have access to. you're telling it to use the "local" storage, but the payload exists solely on the original machine.

copy the vzdump file to a shared location.
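for example, copy it to a storage both nodes can reach (assuming ftp-newton is also mounted on the source node; the dump/ subfolder is where PVE keeps backups on dir/CIFS storages):

Code:
# run on the node that holds the backup
cp /var/lib/vz/dump/vzdump-lxc-556-2023_11_29-18_01_03.tar.zst /mnt/pve/ftp-newton/dump/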
 
I tried that: using WinSCP, I copied it from node to node, but I get the same error. I can only restore it to my shared NAS storage, but I'd like some of the containers to run on SSD storage, which is what's giving me the error.
 
I see.

so you can SEE the file for restoration. when you try to restore it to your local store, it fails.

post the output of the vzdump task (including the failure).
Code:
INFO: starting new backup job: vzdump 556 --node yautja --compress zstd --remove 0 --notes-template '{{guestname}}' --mode snapshot --storage local
INFO: Starting Backup of VM 556 (lxc)
INFO: Backup started at 2023-11-29 17:27:50
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: website
INFO: including mount point rootfs ('/') in backup
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-556-2023_11_29-18_01_03.tar.zst'
INFO: Total bytes written: 7277793280 (6.8GiB, 107MiB/s)
INFO: archive file size: 1.46GB
INFO: adding notes to backup
INFO: Finished Backup of VM 556 (00:01:07)
INFO: Backup finished at 2023-11-29 17:28:57
INFO: Backup job finished successfully
TASK OK

Code:
recovering backed-up configuration from 'local:backup/vzdump-lxc-556-2023_11_29-18_01_03.tar.zst'
mounting container failed
TASK ERROR: unable to restore CT 305 - cannot open directory //rpool/ROOT/pve-1/subvol-305-disk-0: No such file or directory
 
also pvesm status
Code:
root@newton:~# pvesm status
Name                    Type     Status           Total            Used       Available        %
ISO                     cifs     active     27943053056      1649619584     26293433472    5.90%
NVME                     dir   disabled               0               0               0      N/A
auriga-storage           lvm   disabled               0               0               0      N/A
ftp-newton              cifs     active     27943053056      1649619584     26293433472    5.90%
ftp-yautja              cifs     active      8215929088             640      8215928448    0.00%
local                    dir     active       172280960         3376640       168904320    1.96%
local-lvm-yautja         lvm   disabled               0               0               0      N/A
local-zfs            zfspool   disabled               0               0               0      N/A
local-zfs-newt       zfspool     active       225532048        56627676       168904372   25.11%
storage2                 dir   disabled               0               0               0      N/A
storage3                 dir   disabled               0               0               0      N/A
root@newton:~#
 
this is weird.

post the output of pvesm scan zfs and the local-zfs-newt section of /etc/pve/storage.cfg
Code:
root@newton:~# pvesm scan zfs
rpool
rpool/ROOT
rpool/ROOT/pve-1
rpool/data

Code:
zfspool: local-zfs
        pool rpool/ROOT/pve-1
        content rootdir,images
        nodes auriga
        sparse 0

zfspool: local-zfs-newt
        pool rpool/ROOT/pve-1
        content images,rootdir
        nodes newton
        sparse 0
 
here's what I suggest.

remove the entry for local-zfs-newt. it serves no purpose. If you're ok using local-zfs, use that, and make sure that the "nodes" entry reflects the name of your host, i.e. all hosts that have an rpool/ROOT/pve-1 entry.
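roughly like this (node names taken from your storage.cfg above; adjust as needed):

Code:
# removes only the storage definition from storage.cfg, not any data
pvesm remove local-zfs-newt
# let the existing local-zfs entry cover every node that has that pool
pvesm set local-zfs --nodes auriga,newton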

If you prefer a separate namespace (I would), create a new subvol under the root pool and select it as your pool in a new storage entry.
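something along these lines; the dataset name and storage ID are just examples:

Code:
# on each node that should host guests on ZFS, create a dedicated dataset
zfs create rpool/guests
# then once, from any node: register it cluster-wide, limited to those nodes
pvesm add zfspool guests-zfs --pool rpool/guests --content images,rootdir --nodes auriga,newton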
 
remove local-zfs-newt from storage.cfg, or destroy it? wouldn't I lose all my data?
 
