Removed local storage by mistake

Amori

Active Member
May 9, 2013
Hello,

I have a couple of backup servers. I meant to remove one of them, but removed the local storage by mistake.. :(
(Removed it from the Proxmox UI: Datacenter -> Storage)
However, all VMs now show as offline in the Proxmox UI (but they are still online when I ping their IPs).

proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

So how can I add the local drive back to Proxmox?
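
For reference, the default "local" storage corresponds to an entry like the following in /etc/pve/storage.cfg (these are the stock PVE defaults; adjust the path and content types if your installation differs):
Code:
dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0

Alternatively, the same entry can be created from the shell with something like pvesm add dir local --path /var/lib/vz.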
 
Hi,
did you only remove the storage entry?
Or did you delete something on the node itself?
If the VMs are still running, nothing is lost.

Please post the output of the following commands from the node where the "deleted" VMs are running (with the VMIDs):
Code:
cat /etc/pve/storage.cfg
ls /etc/pve/qemu-server/
vgs
lvs
Udo
 
Yes, all VMs are there and still running. Nothing is lost.

I just ran into the same problem on my second server.

Adding several NFS servers to a Proxmox server through the UI takes a lot of time, so I logged in via SSH, edited storage.cfg, and added the entries there. After saving the changes, all VMs now show as offline in the Proxmox UI.

Here is my output:

cat /etc/pve/storage.cfg

dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0


nfs: xxxxxxxxxxx
path /mnt/pve/xxxxxxxxxxx
server xxxxxxxxxxxxxxxx
export /home/xxxxxxxxxxx
options vers=4
content backup
maxfiles 1


nfs: move
path /mnt/pvemove
server xxxxxxxxxxxxxxxxxx
export /home/move
options vers=4
content backup
maxfiles 1


nfs: xxxxxxxxxxx
path /mnt/pve/xxxxxxxxxxx
server xxxxxxxxxxxxxxxxxxxxxx
export /home/GSA2048
options vers=4
content backup
maxfiles 1

nfs: backup29
path /mnt/pve/backup
server xxxxxxxxxxxxxxxx
export /home/backup
options vers=5
content backup
maxfiles 1


nfs: xxxxxxxxxxx
path /mnt/pve/xxxxxxxxxxx
server xxxxxxxxxxx
export /home/xxxxxxxxxxx
options vers=4
content backup
maxfiles 1


nfs: backup139
path /mnt/pve/backup139
server xxxxxxxxxxxxxxx
export /home/backup1951
options vers=4
content backup
maxfiles 1

nfs: Backup122
path /mnt/pve/Backuo122
server xxxxxxxxxxxxxx
export /home/backup6
options vers=4
content backup
maxfiles 1


nfs: backup
path /mnt/pve/backup
server xxxxxxxxxxxxxxxxxxxxx
export /home/backup
options vers=4
content backup
maxfiles 1

nfs: ISO
path /mnt/pve/ISO
server xxxxxxxxxxxxxx
export /home/ISO
options vers=4
content iso
maxfiles 1


nfs: BackupAndra
path /mnt/pve/BackupAndra
server xxxxxxxxxxxxxxxxxxxxx
export /home/backupAndra
options vers=4
content backup
maxfiles 1
ls /etc/pve/qemu-server/
100.conf 105.conf 110.conf 115.conf 123.conf 131.conf 139.conf 147.conf
101.conf 106.conf 111.conf 116.conf 124.conf 134.conf 142.conf 148.conf
102.conf 107.conf 112.conf 117.conf 126.conf 135.conf 144.conf 149.conf
103.conf 108.conf 113.conf 118.conf 128.conf 136.conf 145.conf
104.conf 109.conf 114.conf 121.conf 129.conf 137.conf 146.conf
 
You have one error:
nfs: backup29
path /mnt/pve/backup
server xxxxxxxxxxxxxxxx
export /home/backup
options vers=5
content backup
maxfiles 1

options vers=5 is not supported (there is no NFS version 5).

Also, you should stick to vers=3, since that is the preferred version in Proxmox.
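
A corrected entry for that storage (a sketch based on the config you posted, changing only the NFS version to a supported value) would be:
Code:
nfs: backup29
path /mnt/pve/backup
server xxxxxxxxxxxxxxxx
export /home/backup
options vers=3
content backup
maxfiles 1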
 
I was using version 4 without issues.
 