storage 'local-lvm' does not exists ( backup at least )

Kimmo H

Member
Feb 19, 2019
Hi,
I read about similar problems on the forum, but there weren't any real solutions.

I have a test setup with 2 physical servers: the first one installed from the Proxmox .iso, the second one from Debian 9 + the Proxmox apt repository. The 1st one has a Proxmox Mail Gateway VM and 1 webserver VM; the second server has none.

Now I'm getting email messages every night when the backup starts; the same thing happens if I run it manually:
# vzdump 100 --compress lzo --mode snapshot --storage local --mailto info@xxxxxx --mailnotification failure --node px-ve1

vzdump backup status (px-ve1.XXX) : backup failed
100: 2019-04-02 00:05:04 ERROR: Backup of VM 100 failed - storage 'local-lvm' does not exists

Does this have something to do with me adding these 2 servers to a cluster and changing the second (empty) server's extra disk to lvmthin: sdb-lvm-thin?

Does corosync somehow sync server #2's /etc/pve/storage.cfg to server #1's /etc/pve/storage.cfg??? It seems like it, but WHY? Do servers in the same cluster have to be identical?

I'm afraid to reboot the servers; those VMs probably won't start anymore, but they are happily running at the moment.

Any ideas what to do / check? Glad I'm just testing Proxmox.
 
Sorry, new to Proxmox, coming from XenServer.

root@px-ve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

nfs: backups
        export /USBDisk1
        path /mnt/pve/backups
        server nfs1-innopoli.XXXXX.fi
        content backup
        maxfiles 3
        options vers=3

nfs: iso-images
        export /NFS
        path /mnt/pve/iso-images
        server nfs1-innopoli.XXXXX.fi
        content iso
        options vers=3

lvmthin: sdb-lvm-thin
        thinpool sdb-lvm-thin
        vgname sdb-lvm-thin
        content images,rootdir
        nodes px-ve2

root@px-ve1:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   5   0 wz--n- 930.98g 15.99g

root@px-ve1:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 794.76g             3.94   0.23
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-100-disk-0 pve Vwi-aotz--  32.00g data        20.87
  vm-101-disk-0 pve Vwi-aotz-- 128.00g data        19.27
root@px-ve1:~#

----------------------------------------------------------------------------

root@px-ve2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

nfs: backups
        export /USBDisk1
        path /mnt/pve/backups
        server nfs1-innopoli.XXXXX.fi
        content backup
        maxfiles 3
        options vers=3

nfs: iso-images
        export /NFS
        path /mnt/pve/iso-images
        server nfs1-innopoli.XXXXX.fi
        content iso
        options vers=3

lvmthin: sdb-lvm-thin
        thinpool sdb-lvm-thin
        vgname sdb-lvm-thin
        content images,rootdir
        nodes px-ve2

root@px-ve2:~# vgs
  VG           #PV #LV #SN Attr   VSize VFree
  sdb-lvm-thin   1   1   0 wz--n- 3.64t    0

root@px-ve2:~# lvs
  LV           VG           Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  sdb-lvm-thin sdb-lvm-thin twi-a-tz-- 3.64t             0.00   0.41
root@px-ve2:~#
 
Your VG with the VM disk images on node 1 is 'pve', but you have no reference to that in the config.
You need to add something like this to your config:
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes px-ve1
It is ok to modify the storage config, but you need to make sure that it remains consistent with your setup on all nodes as it is shared via the cluster filesystem.
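Instead of editing /etc/pve/storage.cfg by hand, the same entry can be created with the pvesm CLI, which writes to the cluster-wide config for you. This is a sketch that has to run as root on a Proxmox node; the storage ID 'local-lvm' and node name match the config snippet above, adjust them to your setup:

```shell
# Register node 1's existing thin pool as a storage the cluster knows about.
# 'local-lvm' is the storage ID, 'data' the thin pool LV, 'pve' the VG.
pvesm add lvmthin local-lvm \
    --thinpool data \
    --vgname pve \
    --content rootdir,images \
    --nodes px-ve1
```

Restricting it with --nodes px-ve1 keeps the other node from trying to activate a volume group it does not have.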
 
Hi, and thank you Chris. I added those lines to /etc/pve/storage.cfg and successfully made backups manually:

root@px-ve1:~# vzdump 100 101 --compress lzo --mode snapshot --storage local --mailto info@XXXXX.fi --mailnotification failure --node px-ve1

Will see how it runs from cron next night, will it go to NFS backup share.
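If the goal is for the job to land on the NFS share directly, the --storage flag can point at the NFS storage defined in storage.cfg instead of 'local'. A sketch using the 'backups' storage ID from the config posted above:

```shell
# Back up both VMs straight to the NFS-backed 'backups' storage;
# vzdump then writes into /mnt/pve/backups, and 'maxfiles 3' in the
# storage definition keeps only the three newest backups per VM.
vzdump 100 101 --compress lzo --mode snapshot --storage backups \
    --mailto info@XXXXX.fi --mailnotification failure --node px-ve1
```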

So basically in /etc/pve/storage.cfg you need to have every cluster node's storage defined, including local ones, even if they are not shared between cluster nodes?
 
So basically in /etc/pve/storage.cfg you need to have every cluster node's storage defined, including local ones, even if they are not shared between cluster nodes?
As mentioned, everything within /etc/pve is shared for the whole cluster. Especially the configuration of the storage in /etc/pve/storage.cfg. The node specific configurations are under /etc/pve/nodes/<nodename>/ (still shared cluster wide). We treat storage as global as this makes more sense in a cluster, especially if you consider features like VM and storage migration where every node in the cluster needs to know about the storage configuration of every other node.
 
