[SOLVED] LVM volumes of a VM are missing after the VM is stopped on V5.4.3

feldi

New Member
After upgrading Proxmox to version 5.4.3:

The virtual LVM disks (/dev/vg01/vm-xxxx_disk-1) are only visible while the virtual machine xxxx is running.
If the virtual machine xxxx is stopped, all LVM disks for this machine are gone.

So the LVM disks of a stopped VM cannot be accessed for a migration to another system.
 
When you restart the VM on the same node, do the disks return?
- Are they not visible in the VM hardware configuration when the VM stops?
- Are they not visible under storage (Local-Storage or whatever you have named it)?
 
- When you restart the VM on the same node, do the disks return? -> Yes
- Are they not visible in the VM hardware configuration when the VM stops? -> No, still visible
- Are they not visible under storage (Local-Storage or whatever you have named it)? -> Yes

It looks like a problem with the device mapper?

Here is some additional information:
VM 3410 is running
---------------------------------
>ls -l /dev/vg11_05
total 0
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3011-disk-1 -> ../dm-49
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3107-disk-1 -> ../dm-41
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3201-disk-1 -> ../dm-48
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3202-disk-1 -> ../dm-47
lrwxrwxrwx 1 root root 8 May 5 15:28 vm-3410-disk-1 -> ../dm-44
lrwxrwxrwx 1 root root 8 May 5 15:28 vm-3410-disk-2 -> ../dm-45

ls -l /dev/dm-4*
brw-rw---- 1 root disk 253, 41 May 5 15:28 /dev/dm-41
brw-rw---- 1 root disk 253, 44 May 10 13:04 /dev/dm-44
brw-rw---- 1 root disk 253, 45 May 10 13:04 /dev/dm-45
brw-rw---- 1 root disk 253, 46 May 10 13:04 /dev/dm-46
brw-rw---- 1 root disk 253, 47 May 10 13:04 /dev/dm-47
brw-rw---- 1 root disk 253, 48 May 10 13:01 /dev/dm-48
brw-rw---- 1 root disk 253, 49 May 10 13:04 /dev/dm-49


VM 3410 is stopping
---------------------------------
May 10 12:07:52 serv1017 pvedaemon[673792]: <root@pam> starting task UPID:serv1017:000A4A96:028103A4:5CD54D78:qmshutdown:3410:root@pam:
May 10 12:07:52 serv1017 pvedaemon[674454]: shutdown VM 3410: UPID:serv1017:000A4A96:028103A4:5CD54D78:qmshutdown:3410:root@pam:
May 10 12:07:55 serv1017 kernel: [420100.543419] vmbr0: port 17(tap3410i0) entered disabled state
May 10 12:07:56 serv1017 qmeventd[2939]: Starting cleanup for 3410
May 10 12:07:56 serv1017 qmeventd[2939]: trying to acquire lock...
May 10 12:07:56 serv1017 qmeventd[2939]: OK
May 10 12:07:56 serv1017 qmeventd[2939]: Finished cleanup for 3410
May 10 12:07:56 serv1017 pvedaemon[673792]: <root@pam> end task UPID:serv1017:000A4A96:028103A4:5CD54D78:qmshutdown:3410:root@pam: OK



VM 3410 is stopped
------------------------------------
>ls -l /dev/vg11_05
total 0
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3011-disk-1 -> ../dm-49
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3107-disk-1 -> ../dm-41
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3201-disk-1 -> ../dm-48
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3202-disk-1 -> ../dm-47

>ls -l /dev/dm-4*
brw-rw---- 1 root disk 253, 41 May 5 15:28 /dev/dm-41
brw-rw---- 1 root disk 253, 46 May 10 12:54 /dev/dm-46
brw-rw---- 1 root disk 253, 47 May 10 12:58 /dev/dm-47
brw-rw---- 1 root disk 253, 48 May 10 12:55 /dev/dm-48
brw-rw---- 1 root disk 253, 49 May 10 12:58 /dev/dm-49



In the Proxmox server view, the disks are still visible in the configuration.
In the Proxmox storage view of vg05, the disk images are still visible.

>lvs | grep 3410
vm-3410-disk-1 vg11_04 -wi------- 550.00g
vm-3410-disk-1 vg11_05 -wi------- 100.00g
vm-3410-disk-2 vg11_05 -wi------- 200.00g


VM 3410 is starting again
---------------------------------------
May 10 13:04:35 serv1017 pvedaemon[673792]: <root@pam> starting task UPID:serv1017:000A60D0:028634F9:5CD55AC3:qmstart:3410:root@pam:
May 10 13:04:35 serv1017 pvedaemon[680144]: start VM 3410: UPID:serv1017:000A60D0:028634F9:5CD55AC3:qmstart:3410:root@pam:
May 10 13:04:39 serv1017 systemd[1]: Started 3410.scope.
May 10 13:04:39 serv1017 systemd-udevd[680215]: Could not generate persistent MAC address for tap3410i0: No such file or directory
May 10 13:04:40 serv1017 kernel: [423504.922768] device tap3410i0 entered promiscuous mode
May 10 13:04:40 serv1017 kernel: [423504.938971] vmbr0: port 17(tap3410i0) entered blocking state
May 10 13:04:40 serv1017 kernel: [423504.938975] vmbr0: port 17(tap3410i0) entered disabled state
May 10 13:04:40 serv1017 kernel: [423504.939193] vmbr0: port 17(tap3410i0) entered blocking state
May 10 13:04:40 serv1017 kernel: [423504.939195] vmbr0: port 17(tap3410i0) entered forwarding state
May 10 13:04:40 serv1017 pvedaemon[673792]: <root@pam> end task UPID:serv1017:000A60D0:028634F9:5CD55AC3:qmstart:3410:root@pam: OK

VM 3410 is running
---------------------------------------
ls -l /dev/vg11_05
total 0
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3011-disk-1 -> ../dm-49
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3107-disk-1 -> ../dm-41
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3201-disk-1 -> ../dm-48
lrwxrwxrwx 1 root root 8 May 5 15:27 vm-3202-disk-1 -> ../dm-47
lrwxrwxrwx 1 root root 8 May 10 13:04 vm-3410-disk-1 -> ../dm-44
lrwxrwxrwx 1 root root 8 May 10 13:04 vm-3410-disk-2 -> ../dm-45
 
* How are you trying to migrate the machines? PVE's stack should make sure that the devices get activated for a migration (offline or online).
* You can create the device nodes by running `vgchange -ay` (a minimal sketch follows below).
* PVE deactivates the LVs in order to ensure consistency in shared LVM environments.

* If you try to migrate with the PVE stack, please post the command and the output you're getting.
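
For example, with the vg11_05 volume group from the output above (the same works for a single LV with `lvchange`):

>vgchange -ay vg11_05    # activate all LVs in the VG and recreate the /dev/vg11_05/* device nodes
>vgchange -an vg11_05    # deactivate again when done; LVs that are currently in use stay active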

Thanks!
 
How are you trying to migrate the machines? PVE's stack should make sure that the devices get activated for a migration (offline or online).
-> Offline, because this virtual machine is very large: 15 virtual disks with a total of 12 TB.

We copy the disks with netcat.
Source System:
dd if=/dev/vg11_04/vm-3002-disk-6 bs=150M | pv | nc 10.10.77.20 5555
Target System:
netcat -l -p 5555 | pv | dd of=/dev/zvol/vg01/vm-3002-disk-0 bs=150M
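
For an offline copy like this, the source LV has to be active so that the /dev/vg11_04/... device node exists. Roughly, the surrounding steps look like this (same LV name as above):

lvchange -ay vg11_04/vm-3002-disk-6    # make the device node available before the copy
# ... run the dd | pv | nc pipeline shown above ...
lvchange -an vg11_04/vm-3002-disk-6    # deactivate the LV again after the copy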

PVE deactivates the LVs in order to ensure consistency in shared LVM environments
-> From your point of view, the system works as designed.

For me the problem is solved!
I now use the command "vgchange -ay" and that is OK for me (example below).
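
For the record, this is roughly what it looks like with the volume group from this thread:

>vgchange -ay vg11_05
>lvs vg11_05    # the attribute column of the activated LVs now shows "-wi-a-----" instead of "-wi-------"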

Many thanks
Thomas
 
Glad this works for you!

Please mark the thread as '[SOLVED]' so that others know what to expect!
Thanks!
 
