Logical volume error after moving hard disks to a different server

maretha

New Member
Dec 19, 2023
Good day All,

Please bear with me, I am a newbie to Proxmox. We had Proxmox running with various containers and VMs; however, the physical server's motherboard was damaged and we had to move the hard disks to a different server. Since the move, I cannot start the VMs because the logical volume group they use is missing. Please guide me on how to resolve this. The error message I get when starting one VM is shown below, along with all the other details.

"
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
TASK ERROR: no such logical volume VD1/VD1"

The logs
"
Dec 20 00:00:26 pve5 smartd[972]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 78 to 79
Dec 20 00:00:26 pve5 spiceproxy[1347]: worker exit
Dec 20 00:00:26 pve5 spiceproxy[1346]: worker 1347 finished
Dec 20 00:00:27 pve5 pveproxy[23796]: worker exit
Dec 20 00:00:27 pve5 pveproxy[23794]: worker 23796 finished
Dec 20 00:00:27 pve5 pveproxy[23794]: worker 23797 finished
Dec 20 00:00:27 pve5 pveproxy[23794]: worker 23795 finished
Dec 20 00:00:27 pve5 smartd[972]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 222 to 214
Dec 20 00:00:28 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:00:28 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:00:29 pve5 pveproxy[24563]: worker exit
Dec 20 00:00:29 pve5 pveproxy[24562]: worker exit
Dec 20 00:00:38 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:00:38 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:00:47 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:00:47 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:00:58 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:00:58 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:01:02 pve5 pvescheduler[24678]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Dec 20 00:01:07 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:01:07 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:01:18 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:01:18 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:01:27 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:01:27 pve5 pvestatd[1306]: no such logical volume VD1/VD1
Dec 20 00:01:38 pve5 pvestatd[1306]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Dec 20 00:01:38 pve5 pvestatd[1306]: no such logical volume VD1/VD1
"


root@pve5:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb3 pve lvm2 a-- <232.25g 16.00g
/dev/sdc2 cs lvm2 a-- <930.51g 0

root@pve5:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 30G 0 loop
sda 8:0 0 931.5G 0 disk
└─sda1 8:1 0 465.2G 0 part /mnt/pve/HDD1
sdb 8:16 0 465.8G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 232.2G 0 part
├─pve-swap 253:3 0 8G 0 lvm [SWAP]
├─pve-root 253:4 0 68.1G 0 lvm /
├─pve-data_tmeta 253:5 0 1.4G 0 lvm
│ └─pve-data-tpool 253:7 0 137.4G 0 lvm
│ ├─pve-data 253:8 0 137.4G 1 lvm
│ ├─pve-vm--102--disk--0 253:9 0 8G 0 lvm
│ ├─pve-vm--204--disk--0 253:10 0 8G 0 lvm
│ ├─pve-vm--115--disk--0 253:11 0 12G 0 lvm
│ ├─pve-vm--501--disk--0 253:12 0 16G 0 lvm
│ └─pve-vm--503--disk--0 253:13 0 22G 0 lvm
└─pve-data_tdata 253:6 0 137.4G 0 lvm
└─pve-data-tpool 253:7 0 137.4G 0 lvm
├─pve-data 253:8 0 137.4G 1 lvm
├─pve-vm--102--disk--0 253:9 0 8G 0 lvm
├─pve-vm--204--disk--0 253:10 0 8G 0 lvm
├─pve-vm--115--disk--0 253:11 0 12G 0 lvm
├─pve-vm--501--disk--0 253:12 0 16G 0 lvm
└─pve-vm--503--disk--0 253:13 0 22G 0 lvm
sdc 8:32 0 931.5G 0 disk
├─sdc1 8:33 0 1G 0 part
└─sdc2 8:34 0 930.5G 0 part
├─cs-swap 253:0 0 11.8G 0 lvm
├─cs-home 253:1 0 848.7G 0 lvm
└─cs-root 253:2 0 70G 0 lvm

root@pve5:~# pvdisplay
--- Physical volume ---
PV Name /dev/sdc2
VG Name cs
PV Size 930.51 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238210
Free PE 0
Allocated PE 238210
PV UUID ek5Kq5-isIi-Fboq-tv26-XGBQ-QjbC-kOmiMo

--- Physical volume ---
PV Name /dev/sdb3
VG Name pve
PV Size <232.25 GiB / not usable 2.98 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 59455
Free PE 4096
Allocated PE 55359
PV UUID Kj4lac-ohlg-6dph-efKY-5ijt-HIeg-bfu4KK



root@pve5:~# vgdisplay
--- Volume group ---
VG Name cs
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <930.51 GiB
PE Size 4.00 MiB
Total PE 238210
Alloc PE / Size 238210 / <930.51 GiB
Free PE / Size 0 / 0
VG UUID NTTWQc-QO36-sNTe-2X4E-ajl2-Fudy-1TbjBc

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 107
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 12
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <232.25 GiB
PE Size 4.00 MiB
Total PE 59455
Alloc PE / Size 55359 / <216.25 GiB
Free PE / Size 4096 / 16.00 GiB
VG UUID BGl7Sj-D6eH-zJvP-2cAs-gYBV-UCJJ-g8RY8p


root@pve5:~# pvscan
PV /dev/sdc2 VG cs lvm2 [<930.51 GiB / 0 free]
PV /dev/sdb3 VG pve lvm2 [<232.25 GiB / 16.00 GiB free]
Total: 2 [<1.14 TiB] / in use: 2 [<1.14 TiB] / in no VG: 0 [0 ]
root@pve5:~# vgscan
Found volume group "cs" using metadata type lvm2
Found volume group "pve" using metadata type lvm2
root@pve5:~# lvscan
ACTIVE '/dev/cs/swap' [<11.82 GiB] inherit
ACTIVE '/dev/cs/home' [848.69 GiB] inherit
ACTIVE '/dev/cs/root' [70.00 GiB] inherit
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/root' [68.06 GiB] inherit
ACTIVE '/dev/pve/data' [<137.38 GiB] inherit
ACTIVE '/dev/pve/vm-102-disk-0' [8.00 GiB] inherit
ACTIVE '/dev/pve/vm-204-disk-0' [8.00 GiB] inherit
inactive '/dev/pve/snap_vm-115-disk-0_Snap1' [12.00 GiB] inherit
inactive '/dev/pve/snap_vm-115-disk-0_Snap2' [12.00 GiB] inherit
ACTIVE '/dev/pve/vm-115-disk-0' [12.00 GiB] inherit
ACTIVE '/dev/pve/vm-501-disk-0' [16.00 GiB] inherit
inactive '/dev/pve/snap_vm-503-disk-0_PreInstall' [16.00 GiB] inherit
ACTIVE '/dev/pve/vm-503-disk-0' [22.00 GiB] inherit
inactive '/dev/pve/snap_vm-503-disk-0_Netflow' [16.00 GiB] inherit


Proxmox storage config (/etc/pve/storage.cfg):

dir: local
path /var/lib/vz
content backup,vztmpl
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images

lvmthin: VD1
thinpool VD1
vgname VD1
content images,rootdir
nodes pve5

dir: HDD1
path /mnt/pve/HDD1
content iso,vztmpl,backup,snippets,rootdir,images
is_mountpoint 1
nodes pve5
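
From what I can see, the VD1 storage entry points at a thin pool named VD1 in a volume group named VD1, but no VG called VD1 shows up in the pvs/vgscan output above, only cs and pve. If it helps, I can run some read-only checks like the ones below and post the output (I am assuming these are the right commands to see whether the disk that held that volume group is even being detected):

[CODE]
# list physical volumes, including devices LVM has not initialized as PVs
pvs -a
# list the volume groups LVM can see
vgs
# show filesystem/LVM signatures on every block device
lsblk -f
[/CODE]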


I hope I have provided all the information you need to help resolve my problem. Please let me know if I should share any other details I may have omitted.

Thank you.
 
Hey,

I assume all disks showed up in lsblk. Could you run vgscan -d --mknodes --cache and post the output? Also try vgchange -ay; after changes like this it can be necessary to activate VGs manually, which is what that command does.
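
For example, something like this, run as root (the lvscan at the end is only there to check what ended up active, adjust as you see fit):

[CODE]
# rescan for volume groups and recreate device nodes
vgscan -d --mknodes --cache
# activate all logical volumes in all visible volume groups
vgchange -ay
# verify which LVs are now active
lvscan
[/CODE]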

Note: with the [CODE]...[/CODE] tag you can easily format output nicely.
 
Hi Hannes,
Yes, all disks showed up with lsblk.
The output is as follows:

[CODE]
root@pve5:~# vgscan -d --mknodes --cache
Ignoring vgscan --cache command because lvmetad is no longer used.
[/CODE]

I have also executed the vgchange -ay command several times, but it still gives the same error.
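
In case it helps, I can gather more output as well; I was thinking of something along these lines (assuming these are the right places to look):

[CODE]
# all PVs, including uninitialized devices
pvs -a
# all LVs, including hidden/internal ones
lvs -a
# pvestatd messages from the current boot
journalctl -b -u pvestatd
[/CODE]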
 
