[SOLVED] Can't see my transferred backup files

spccat

Hi there,

I have two backup files:

Code:
ls -la /var/lib/vz/dump
-rw-r--r-- 1 root root 41654707562 May 27 08:26 vzdump-lxc-142-2020_05_25-12_55_28.tar.lzo
-rw-r--r-- 1 root root 16495146977 Jul 3 09:21 vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo

When I go to the web management interface, I cannot see them under the local storage. I see only Disk Images, ISO Images and Containers.

Also, when I try to restore the QEMU backup, I get the following:
Code:
qmrestore vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo 901
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp10418.fifo - /var/tmp/vzdumptmp10418
CFG: size: 363 name: qemu-server.conf
DEV: dev_id=1 size: 536870912000 devname: drive-ide0
CTIME: Wed Jul 1 09:26:51 2020
no lock found trying to remove 'create' lock
command 'set -o pipefail && lzop -d -c /var/lib/vz/dump/vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp10418.fifo - /var/tmp/vzdumptmp10418' failed: storage 'local-lvm' does not exist

I also see that I don't have an /etc/pve/storage.cfg, which I don't understand, as there are already new machines running on it. Shouldn't it exist after the Proxmox installation?

I run Proxmox Virtual Environment 6.2-6 and the server is on the latest Buster release. I have no fancy setup, just a single standalone server.

Could anyone help, please? It's quite a big file and I don't want to transfer it again (it takes ages).

Thanks,
Slarti
 
I also see that I don't have an /etc/pve/storage.cfg
I would start here. Is the pve-cluster service running? (in the GUI: Node -> System panel)
Do you have a fuse mount in /etc/pve? You can check that with mount | grep /etc/pve. You should get an output like this:
Code:
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
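If in doubt, a quick check from the shell could look like this (just a sketch; these are standard systemd commands available on any PVE node):
Code:
# is the cluster filesystem service running?
systemctl status pve-cluster

# if not, try to start it and check the journal for errors
systemctl start pve-cluster
journalctl -u pve-cluster -n 50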
 
I would start here. Is the pve-cluster service running? (in the GUI: Node -> System panel)
Yes.
Do you have a fuse mount in /etc/pve? You can check that with mount | grep /etc/pve. You should get an output like this:
Code:
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
I get the same.

*scratch head*
 
Do you have any files in /etc/pve?

Did you, at some point, remove the /etc/pve/storage.cfg file? Maybe there is something in the bash history?
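For example, something along these lines would show whether the file was ever removed from an interactive shell (only a sketch, and it only covers root's bash history):
Code:
grep -n 'storage.cfg' /root/.bash_history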
 
Do you have any files in /etc/pve?

Did you, at some point, remove the /etc/pve/storage.cfg file? Maybe there is something in the bash history?

Yes, I have.

Now the funniest thing happened and I don't know why. I just cleaned up the files left over from the failed restore, and the storage.cfg showed up:
Code:
-rw-r----- 1 root www-data  451 Jul 12 13:48 authkey.pub
-rw-r----- 1 root www-data  451 Jul 12 13:48 authkey.pub.old
-r--r----- 1 root www-data 6428 Jan  1  1970 .clusterlog
-rw-r----- 1 root www-data    2 Jan  1  1970 .debug
drwxr-xr-x 2 root www-data    0 May 27 11:59 firewall
drwxr-xr-x 2 root www-data    0 May 25 13:47 ha
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 local -> nodes/Milliways
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 lxc -> nodes/Milliways/lxc
-r--r----- 1 root www-data   42 Jan  1  1970 .members
drwxr-xr-x 2 root www-data    0 May 25 13:47 nodes
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 openvz -> nodes/Milliways/openvz
drwx------ 2 root www-data    0 May 25 13:47 priv
-rw-r----- 1 root www-data 2074 May 25 13:47 pve-root-ca.pem
-rw-r----- 1 root www-data 1679 May 25 13:47 pve-www.key
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/Milliways/qemu-server
-r--r----- 1 root www-data 1354 Jan  1  1970 .rrd
drwxr-xr-x 2 root www-data    0 May 25 13:47 sdn
-rw-r----- 1 root www-data  103 Jul 13 11:44 storage.cfg
-r--r----- 1 root www-data  656 Jan  1  1970 .version
drwxr-xr-x 2 root www-data    0 May 25 13:46 virtual-guest
-r--r----- 1 root www-data  588 Jan  1  1970 .vmlist
-rw-r----- 1 root www-data  119 May 25 13:47 vzdump.cron
root@PM /etc/pve # more storage.cfg 
dir: local
        path /var/lib/vz
        content backup,iso,rootdir,snippets,vztmpl,images
        maxfiles 0
        shared 0

root@PM /etc/pve # date
Mon 13 Jul 2020 11:57:01 AM CEST
root@PM /etc/pve #

I guess this file was created while I was trying to find out why my backups were not showing. Anyway, I tried the restore again and sadly it is still not working:

Code:
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp17224.fifo - /var/tmp/vzdumptmp17224
CFG: size: 363 name: qemu-server.conf
DEV: dev_id=1 size: 536870912000 devname: drive-ide0
CTIME: Wed Jul  1 09:26:51 2020
no lock found trying to remove 'create'  lock
TASK ERROR: command 'set -o pipefail && lzop -d -c /var/lib/vz/dump/vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp17224.fifo - /var/tmp/vzdumptmp17224' failed: storage 'local-lvm' does not exist

I hope you can identify the problem.
 
Well, when you removed the storage.cfg, PVE fell back to the very basic default, which is the one that has now been created again. You will have to recreate the LVM-thin storage with the name local-lvm. You can do so via the GUI. The volume group should be pve and the thin pool should be called data.

Once you have recreated that storage, PVE should know about it and be able to restore the backup.
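If you prefer the CLI over the GUI, re-adding the storage definition could look roughly like this (a sketch, assuming the volume group pve and the thin pool data actually exist on your node):
Code:
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content rootdir,images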
 
Well, when you removed the storage.cfg, PVE fell back to the very basic default, which is the one that has now been created again. You will have to recreate the LVM-thin storage with the name local-lvm. You can do so via the GUI. The volume group should be pve and the thin pool should be called data.

Once you have recreated that storage, PVE should know about it and be able to restore the backup.

First, thanks Aaron for your help.
I went through my shell history and couldn't find any trace of me deleting the storage.cfg. Also, I have 8 QEMU servers and one LXC container running with no trouble.

So I went to the GUI, clicked on the node and then on "Disks / LVM-Thin", and this is what happened:

[screenshot: Disks > LVM-Thin, empty list]

[screenshot: Create: Thinpool dialog]


It can't find a disk.

[screenshot: Disks overview]
I have these disks.

Or am I in the wrong place?
 
You are in the wrong place ;)

Datacenter (the top item in the tree view on the left) -> Storage.
 
Sorry, but somehow it's borked:
[screenshot: Add: LVM-Thin dialog]

I click on "Volume group" and nothing happens, and the same goes for "Thin Pool". Specifying the nodes doesn't change a thing.
 
What is the output of the following commands on that node?
  • pvs
  • vgs
  • lvs
  • lsblk
Please copy and paste the output inside [code][/code] tags.
 
Code:
root@PM ~ # pvs
root@PM ~ # vgs
root@PM ~ # lvs
root@PM ~ # lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
loop0     7:0    0  500G  0 loop
sda       8:0    0  1.8T  0 disk
├─sda1    8:1    0   32G  0 part
│ └─md0   9:0    0   32G  0 raid1 [SWAP]
├─sda2    8:2    0  512M  0 part
│ └─md1   9:1    0  511M  0 raid1 /boot
└─sda3    8:3    0  1.8T  0 part
  └─md2   9:2    0  1.8T  0 raid1 /
sdb       8:16   0  1.8T  0 disk
├─sdb1    8:17   0   32G  0 part
│ └─md0   9:0    0   32G  0 raid1 [SWAP]
├─sdb2    8:18   0  512M  0 part
│ └─md1   9:1    0  511M  0 raid1 /boot
└─sdb3    8:19   0  1.8T  0 part
  └─md2   9:2    0  1.8T  0 raid1 /
So besides lsblk, there are no outputs.
 
May I ask what kind of machine this is? It's using MD RAID, which we do not recommend.

If the `local` storage is the only one you have, and the `lsblk` output indicates that, the restore might work if you add the `--storage local` option to the qmrestore command.
 
Well, that's it:
Code:
# qmrestore vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo 800 --storage local
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-101-2020_07_01-14_26_49.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp17228.fifo - /var/tmp/vzdumptmp17228
CFG: size: 363 name: qemu-server.conf
DEV: dev_id=1 size: 536870912000 devname: drive-ide0
CTIME: Wed Jul  1 09:26:51 2020
Formatting '/var/lib/vz/images/800/vm-800-disk-0.raw', fmt=raw size=536870912000
new volume ID is 'local:800/vm-800-disk-0.raw'
map 'drive-ide0' to '/var/lib/vz/images/800/vm-800-disk-0.raw' (write zeros = 0)
progress 1% (read 5368709120 bytes, duration 19 sec)
progress 2% (read 10737418240 bytes, duration 20 sec)
progress 3% (read 16106127360 bytes, duration 20 sec)
:
:
progress 99% (read 531502202880 bytes, duration 413 sec)
progress 100% (read 536870912000 bytes, duration 425 sec)
total bytes read 536870912000, sparse bytes 504320888832 (93.9%)
space reduction due to 4K zero blocks 4.68%
rescan volumes...

To come to the point: after changing the CD in the "drive", it started up and it runs.
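For anyone reading along: swapping the CD can also be done from the CLI, roughly like this (assuming the virtual CD drive is ide2 and the ISO lies on the local storage; the ISO name is just a placeholder):
Code:
# point the virtual CD drive at an ISO on the local storage
qm set 800 --ide2 local:iso/your-image.iso,media=cdrom

# or detach the CD entirely
qm set 800 --ide2 none,media=cdrom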

Thank you so much Aaron for your quick help.
 
Please be so kind as to mark the thread as solved. To do so, edit the first post and choose `SOLVED` in the drop-down field next to the title.
Thanks!
 
