No VM disk

gtrovato
Member
May 16, 2019
Hi All,

I've added an LVM volume group on my PVE node, but in the web UI I don't see the VM disks inside it, even though I can see them with fdisk, as shown below.
What's the issue?
Thank you!

Disk /dev/mapper/pve--OLD--95683659-vm--110--disk--0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0xb319051b

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve--OLD--95683659-vm--110--disk--0-part1 * 2048 201328639 201326592 96G 83 Linux
/dev/mapper/pve--OLD--95683659-vm--110--disk--0-part2 201330686 209713151 8382466 4G 5 Extended
/dev/mapper/pve--OLD--95683659-vm--110--disk--0-part5 201330688 209713151 8382464 4G 82 Linux swap / Solaris
 
Hello
Do you mean you added the VG at Node > Disks > LVM? Did you check the "Add Storage" checkbox?
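(For reference, the CLI equivalent of that dialog would be something like the following; the storage name "old" is just an example:)

```
# Hypothetical CLI equivalent of "Add Storage" for an existing VG
pvesm add lvm old --vgname pve-OLD-95683659 --content images,rootdir
```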

Can you give me the output of lsblk, lvs, vgs and pvesm status?
 
Hi Philipp,

thanks for answering!
Here:

root@pve:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 2.7T 0 disk
sdb 8:16 0 2.7T 0 disk
sdc 8:32 0 149.1G 0 disk
├─sdc1 8:33 0 1007K 0 part
├─sdc2 8:34 0 1G 0 part
└─sdc3 8:35 0 148G 0 part
├─pve-swap 253:4 0 8G 0 lvm [SWAP]
├─pve-root 253:5 0 47G 0 lvm /
├─pve-data_tmeta 253:6 0 1G 0 lvm
│ └─pve-data-tpool 253:8 0 75G 0 lvm
│ ├─pve-data 253:27 0 75G 1 lvm
│ └─pve-vm--200--disk--0 253:28 0 10G 0 lvm
└─pve-data_tdata 253:7 0 75G 0 lvm
└─pve-data-tpool 253:8 0 75G 0 lvm
├─pve-data 253:27 0 75G 1 lvm
└─pve-vm--200--disk--0 253:28 0 10G 0 lvm
sdd 8:48 0 931.5G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 931G 0 part
├─pve--OLD--95683659-swap 253:0 0 8G 0 lvm
├─pve--OLD--95683659-root 253:1 0 96G 0 lvm
├─pve--OLD--95683659-data_tmeta 253:2 0 8.1G 0 lvm
│ └─pve--OLD--95683659-data-tpool 253:9 0 794.8G 0 lvm
│ ├─pve--OLD--95683659-data 253:10 0 794.8G 1 lvm
│ ├─pve--OLD--95683659-vm--100--disk--0 253:11 0 32G 0 lvm
│ ├─pve--OLD--95683659-vm--110--disk--0 253:12 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--112--disk--0 253:13 0 50G 0 lvm
│ ├─pve--OLD--95683659-vm--106--disk--0 253:14 0 200G 0 lvm
│ ├─pve--OLD--95683659-vm--109--disk--0 253:15 0 50G 0 lvm
│ ├─pve--OLD--95683659-vm--112--state--before_ntp_solve_issue 253:16 0 16.5G 0 lvm
│ ├─pve--OLD--95683659-vm--118--disk--0 253:17 0 500G 0 lvm
│ ├─pve--OLD--95683659-vm--120--disk--0 253:18 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--109--state--before_g729 253:19 0 4.5G 0 lvm
│ ├─pve--OLD--95683659-vm--222--disk--0 253:20 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--206--disk--0 253:21 0 1G 0 lvm
│ ├─pve--OLD--95683659-vm--105--disk--0 253:22 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--115--disk--0 253:23 0 50G 0 lvm
│ ├─pve--OLD--95683659-vm--222--state--before_italian_audio_file 253:24 0 4.5G 0 lvm
│ └─pve--OLD--95683659-vm--206--disk--1 253:25 0 32G 0 lvm
├─pve--OLD--95683659-data_tdata 253:3 0 794.8G 0 lvm
│ └─pve--OLD--95683659-data-tpool 253:9 0 794.8G 0 lvm
│ ├─pve--OLD--95683659-data 253:10 0 794.8G 1 lvm
│ ├─pve--OLD--95683659-vm--100--disk--0 253:11 0 32G 0 lvm
│ ├─pve--OLD--95683659-vm--110--disk--0 253:12 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--112--disk--0 253:13 0 50G 0 lvm
│ ├─pve--OLD--95683659-vm--106--disk--0 253:14 0 200G 0 lvm
│ ├─pve--OLD--95683659-vm--109--disk--0 253:15 0 50G 0 lvm
│ ├─pve--OLD--95683659-vm--112--state--before_ntp_solve_issue 253:16 0 16.5G 0 lvm
│ ├─pve--OLD--95683659-vm--118--disk--0 253:17 0 500G 0 lvm
│ ├─pve--OLD--95683659-vm--120--disk--0 253:18 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--109--state--before_g729 253:19 0 4.5G 0 lvm
│ ├─pve--OLD--95683659-vm--222--disk--0 253:20 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--206--disk--0 253:21 0 1G 0 lvm
│ ├─pve--OLD--95683659-vm--105--disk--0 253:22 0 100G 0 lvm
│ ├─pve--OLD--95683659-vm--115--disk--0 253:23 0 50G 0 lvm
│ ├─pve--OLD--95683659-vm--222--state--before_italian_audio_file 253:24 0 4.5G 0 lvm
│ └─pve--OLD--95683659-vm--206--disk--1 253:25 0 32G 0 lvm
└─pve--OLD--95683659-grubtemp 253:26 0 4M 0 lvm
root@pve:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 75.03g 1.77 1.64
root pve -wi-ao---- 47.01g
swap pve -wi-ao---- 8.00g
vm-200-disk-0 pve Vwi-aotz-- 10.00g data 13.26
data pve-OLD-95683659 twi-aotz-- <794.79g 23.94 1.45
grubtemp pve-OLD-95683659 -wi-a----- 4.00m
root pve-OLD-95683659 -wi-a----- 96.00g
snap_vm-105-disk-0_before_influxdb pve-OLD-95683659 Vri---tz-k 100.00g data vm-105-disk-0
snap_vm-105-disk-0_before_rbe pve-OLD-95683659 Vri---tz-k 100.00g data vm-105-disk-0
snap_vm-109-disk-0_before_g729 pve-OLD-95683659 Vri---tz-k 50.00g data vm-109-disk-0
snap_vm-109-disk-0_before_video pve-OLD-95683659 Vri---tz-k 50.00g data
snap_vm-109-disk-0_clean_installation_with_mc_ssh pve-OLD-95683659 Vri---tz-k 50.00g data
snap_vm-109-disk-0_now_16_09_2020 pve-OLD-95683659 Vri---tz-k 50.00g data
snap_vm-109-disk-0_now_29_1_2020 pve-OLD-95683659 Vri---tz-k 50.00g data
snap_vm-109-disk-0_sccp_enabled pve-OLD-95683659 Vri---tz-k 50.00g data
snap_vm-112-disk-0_before_new_pfB pve-OLD-95683659 Vri---tz-k 50.00g data vm-112-disk-0
snap_vm-112-disk-0_before_ntp_solve_issue pve-OLD-95683659 Vri---tz-k 50.00g data vm-112-disk-0
snap_vm-118-disk-0_before_hacs pve-OLD-95683659 Vri---tz-k 500.00g data vm-118-disk-0
snap_vm-118-disk-0_before_removing_nm pve-OLD-95683659 Vri---tz-k 500.00g data vm-118-disk-0
snap_vm-118-disk-0_before_updating pve-OLD-95683659 Vri---tz-k 500.00g data vm-118-disk-0
snap_vm-222-disk-0_before_italian_audio_file pve-OLD-95683659 Vri---tz-k 100.00g data vm-222-disk-0
swap pve-OLD-95683659 -wi-a----- 8.00g
vm-100-disk-0 pve-OLD-95683659 Vwi-a-tz-- 32.00g data 60.56
vm-105-disk-0 pve-OLD-95683659 Vwi-a-tz-- 100.00g data 28.96
vm-106-disk-0 pve-OLD-95683659 Vwi-a-tz-- 200.00g data 4.50
vm-109-disk-0 pve-OLD-95683659 Vwi-a-tz-- 50.00g data snap_vm-109-disk-0_now_29_1_2020 23.70
vm-109-state-before_g729 pve-OLD-95683659 Vwi-a-tz-- <4.49g data 38.50
vm-110-disk-0 pve-OLD-95683659 Vwi-a-tz-- 100.00g data 3.41
vm-112-disk-0 pve-OLD-95683659 Vwi-a-tz-- 50.00g data 100.00
vm-112-state-before_ntp_solve_issue pve-OLD-95683659 Vwi-a-tz-- <16.49g data 6.72
vm-115-disk-0 pve-OLD-95683659 Vwi-a-tz-- 50.00g data 1.53
vm-118-disk-0 pve-OLD-95683659 Vwi-a-tz-- 500.00g data 3.30
vm-120-disk-0 pve-OLD-95683659 Vwi-a-tz-- 100.00g data 16.34
vm-206-disk-0 pve-OLD-95683659 Vwi-a-tz-- 1.00g data 0.00
vm-206-disk-1 pve-OLD-95683659 Vwi-a-tz-- 32.00g data 3.05
vm-222-disk-0 pve-OLD-95683659 Vwi-a-tz-- 100.00g data 8.43
vm-222-state-before_italian_audio_file pve-OLD-95683659 Vwi-a-tz-- <4.49g data 18.52
root@pve:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 4 0 wz--n- <148.05g 16.00g
pve-OLD-95683659 1 33 0 wz--n- <931.01g <15.99g
root@pve:~# pvesm status
Name Type Status Total Used Available %
local dir active 48206000 3089212 42635636 6.41%
local-lvm lvmthin active 78675968 1392564 77283403 1.77%
old lvm active 976232448 959467520 16764928 98.28%
pve lvm active 155238400 138457088 16781312 89.19%
root@pve:~#
 
Ok, sorry. I still don't entirely understand the problem. Can you please be more specific about which exact output (VM, disk) you are expecting in which GUI window?
 
Not sure if this is your problem, but in the screenshot you are looking at Containers, not VM Disks.
 
Strange. You should be able to see it. Can I have a look at your /etc/pve/storage.cfg?
 
dir: local
path /var/lib/vz
content backup,vztmpl,iso

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images

lvm: old
vgname pve-OLD-95683659
content rootdir,images
shared 0

lvm: pve
vgname pve
content images,rootdir
nodes pve
shared 0
 
Hmmm... are you using thin provisioning? The disks have the attributes Vwi-a-tz--, where the 't' in position 7 stands for "thin".
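A quick way to spot thin volumes from that attribute string (a sketch; the sample lines mimic `lvs --noheadings -o lv_name,lv_attr` output, which you would pipe in on a live node):

```shell
# Flag thin-provisioned LVs by checking the 7th character of lv_attr.
printf '%s\n' \
  'vm-110-disk-0 Vwi-a-tz--' \
  'root -wi-ao----' |
awk 'substr($2,7,1)=="t" {print $1 " -> thin"}'
# prints: vm-110-disk-0 -> thin
```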

If you are using thin provisioning, you have to add the storage as 'lvmthin' in /etc/pve/storage.cfg (same as your local-lvm), not as 'lvm'.
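For example, the 'lvm: old' entry could be replaced with something like this (a sketch; the thin pool is named 'data' according to your lvs output):

```
lvmthin: old
	thinpool data
	vgname pve-OLD-95683659
	content rootdir,images
```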
 
Great Philipp,

I see the disks now!
One more question: how do I get the VMs back?
Do I just create new VMs and attach the corresponding disks?

Thank you!
 
That depends highly on the configuration of your VMs. At the very least you have to edit the 'boot' option in /etc/pve/qemu-server/<vmid>.conf to add the SCSI device to the boot order. If you have a complex RAID configuration, it might be trickier.

EDIT: If you create new VMs with the same IDs the old VMs had, you can use qm rescan to auto-assign the disks to the (hopefully) correct VMs.
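A rough sketch of that workflow, using VM 110 and the storage name 'old' as examples (option syntax may differ slightly between PVE versions):

```
# After recreating VM 110 with the same ID (no disk attached yet):
qm rescan --vmid 110                      # picks up vm-110-disk-0 as an unused disk
qm set 110 --scsi0 old:vm-110-disk-0      # attach the recovered disk
qm set 110 --boot order=scsi0             # make it the boot device
```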
 
Also, once your issue is resolved, please don't forget to mark the thread as resolved so people can find it more easily in the future.
 
Great gtrovato,

It might be possible, though a bit tricky, to recover the VM configurations from your original PVE installation. If they were very complex, this might be worth it; otherwise you are probably faster recreating them.

Tell me if you want to attempt a recovery, so I can assist.
 
