How to mount LVM disk of VM

WebHostingNeeds
Dec 4, 2015
How do I mount an LVM disk used by a VM on the host machine, so that I can copy files?

I have a network issue in the VM and can't get eth0 working. The VM uses an LVM disk image.

Code:
root@server1:~# parted /dev/pve/vm-104-disk-1 print
Model: Linux device-mapper (linear) (dm)
Disk /dev/dm-5: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  30.1GB  30.1GB  primary   ext4            boot
 2      30.1GB  34.4GB  4293MB  extended
 5      30.1GB  34.4GB  4293MB  logical   linux-swap(v1)

root@server1:~#

I need to mount the 2nd partition of the LV at /mnt.
 
I find that kpartx is a very useful tool in such cases.

Code:
kpartx -av /dev/dm-5
lsblk
# Look for the correct /dev/ node in the hierarchy under dm-5
# You may want to mount it read-only; this is safest, as it avoids journal recovery:
mount -o ro,noload -t ext4 /dev/|node| /mnt/
# afterwards
kpartx -dv /dev/dm-5
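
To pick the right node from the lsblk output without eyeballing it, here is a small sketch of my own (not part of the original advice): it filters `lsblk -nr -o NAME,FSTYPE` output for the first ext4 entry. The device names are just examples matching this thread.

```shell
#!/bin/sh
# Sketch: read `lsblk -nr -o NAME,FSTYPE` output on stdin and print the
# node name of the first ext4 partition found.
pick_ext4_node() {
  awk '$2 == "ext4" { print $1; exit }'
}

# Hypothetical usage:
#   lsblk -nr -o NAME,FSTYPE /dev/dm-5 | pick_ext4_node
```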
 
The LVs on my server are:

Code:
root@server1:~# lvs
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  LV            VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve  -wi-ao----  81.00g                                                    
  vm-100-disk-1 pve  -wi-ao---- 500.00g                                                    
  vm-100-disk-2 pve  -wi-ao---- 500.00g                                                    
  vm-101-disk-1 pve  -wi-ao----  12.00g                                                    
  vm-103-disk-1 pve  -wi-ao----  32.00g                                                    
  vm-104-disk-1 pve  -wi-------  32.00g                                                    
root@server1:~#

When I try kpartx, I get an error.

Code:
root@server1:~# kpartx -av /dev/pve/vm-104-disk-1
failed to stat() /dev/pve/vm-104-disk-1
root@server1:~# kpartx -av /dev/mapper/pve-vm-104-disk-1
failed to stat() /dev/mapper/pve-vm-104-disk-1
root@server1:~#

Any idea what I am doing wrong?
 
It is good that you included the output of lvs. It shows the logical volume as inactive, which would explain why you can't access it. Is this on cluster storage?

What is the output of lvdisplay pve/vm-104-disk-1?

Can you do lvchange -ay pve/vm-104-disk-1?
 
It is an OVH server, no cluster.

Code:
root@server1:~# lvdisplay
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  --- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID                PYCEm1-bXYK-eRwE-1Eyv-aG3h-Hl0W-BIRTxe
  LV Write Access        read/write
  LV Creation host, time rescue.ovh.net, 2015-12-03 08:58:45 +0000
  LV Status              available
  # open                 1
  LV Size                81.00 GiB
  Current LE             20736
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                pve
  LV UUID                DIGMu2-EJjy-dHv3-pIG1-VVMn-rCzV-U9hydb
  LV Write Access        read/write
  LV Creation host, time server1, 2015-12-03 20:27:18 +0000
  LV Status              available
  # open                 1
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-2
  LV Name                vm-100-disk-2
  VG Name                pve
  LV UUID                2Otx07-mDge-vf1X-pzCt-4f0G-ZfOi-gXQbK9
  LV Write Access        read/write
  LV Creation host, time server1, 2015-12-03 20:57:19 +0000
  LV Status              available
  # open                 1
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-101-disk-1
  LV Name                vm-101-disk-1
  VG Name                pve
  LV UUID                NQJCeF-D6ld-17pG-7v1g-ebRN-OKr4-YQXifR
  LV Write Access        read/write
  LV Creation host, time server1, 2015-12-05 09:01:12 +0000
  LV Status              available
  # open                 1
  LV Size                12.00 GiB
  Current LE             3072
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-1
  LV Name                vm-103-disk-1
  VG Name                pve
  LV UUID                F6XUeP-y5Av-7sYW-cQgZ-nZId-BTZ4-7y1dLC
  LV Write Access        read/write
  LV Creation host, time server1, 2015-12-06 18:15:56 +0000
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                pve
  LV UUID                idFjoo-0y8p-DhcA-BxQy-9rZQ-8g5c-PCyneb
  LV Write Access        read/write
  LV Creation host, time server1, 2015-12-16 08:30:50 +0000
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
root@server1:~# lvdisplay /dev/pve/vm-104-disk-1
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  --- Logical volume ---
  LV Path                /dev/pve/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                pve
  LV UUID                idFjoo-0y8p-DhcA-BxQy-9rZQ-8g5c-PCyneb
  LV Write Access        read/write
  LV Creation host, time server1, 2015-12-16 08:30:50 +0000
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
root@server1:~#

I ran the command you suggested:

Code:
root@server1:~# lvchange -ay pve/vm-104-disk-1 
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
root@server1:~# lvs
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  LV            VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve  -wi-ao----  81.00g                                                    
  vm-100-disk-1 pve  -wi-ao---- 500.00g                                                    
  vm-100-disk-2 pve  -wi-ao---- 500.00g                                                    
  vm-101-disk-1 pve  -wi-ao----  12.00g                                                    
  vm-103-disk-1 pve  -wi-ao----  32.00g                                                    
  vm-104-disk-1 pve  -wi-a-----  32.00g                                                    
root@server1:~#

How did you find out the logical volume was inactive from the lvs result?

Looks like I got it working...

Code:
root@server1:~# kpartx -av /dev/mapper/pve-vm-104-disk-1
failed to stat() /dev/mapper/pve-vm-104-disk-1
root@server1:~# kpartx -av /dev/pve/vm-104-disk-1
add map pve-vm--104--disk--1p1 (252:6): 0 58718208 linear /dev/pve/vm-104-disk-1 2048
add map pve-vm--104--disk--1p2 (252:7): 0 2 linear /dev/pve/vm-104-disk-1 58722302
add map pve-vm--104--disk--1p5 : 0 8384512 linear /dev/pve/vm-104-disk-1 58722304
root@server1:~#
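
A note on those names: device-mapper escapes each "-" inside the VG and LV names by doubling it, which is why pve/vm-104-disk-1 partition 1 shows up as pve-vm--104--disk--1p1. A hedged sketch of that mapping (my own helper, not a standard tool):

```shell
#!/bin/sh
# Sketch: build the /dev/mapper name device-mapper/kpartx would use for
# partition N of a given VG/LV, by doubling every "-" in the names.
dm_part_name() {
  vg=$1 lv=$2 part=$3
  printf '%s-%sp%s\n' "$(printf '%s' "$vg" | sed 's/-/--/g')" \
                      "$(printf '%s' "$lv" | sed 's/-/--/g')" "$part"
}

# Hypothetical usage:
#   dm_part_name pve vm-104-disk-1 1   # prints pve-vm--104--disk--1p1
```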
 
Thanks, I got it working.

Code:
root@server1:/dev/mapper# ls -la
total 0
drwxr-xr-x  2 root root     240 Dec 19 03:07 .
drwxr-xr-x 21 root root    4680 Dec 19 03:07 ..
crw-------  1 root root 10, 236 Dec  4 03:42 control
lrwxrwxrwx  1 root root       7 Dec 12 21:09 pve-data -> ../dm-0
lrwxrwxrwx  1 root root       7 Dec 12 21:09 pve-vm--100--disk--1 -> ../dm-1
lrwxrwxrwx  1 root root       7 Dec 12 21:09 pve-vm--100--disk--2 -> ../dm-2
lrwxrwxrwx  1 root root       7 Dec 12 21:09 pve-vm--101--disk--1 -> ../dm-3
lrwxrwxrwx  1 root root       7 Dec 12 21:09 pve-vm--103--disk--1 -> ../dm-4
lrwxrwxrwx  1 root root       7 Dec 19 03:05 pve-vm--104--disk--1 -> ../dm-5
lrwxrwxrwx  1 root root       7 Dec 19 03:07 pve-vm--104--disk--1p1 -> ../dm-6
lrwxrwxrwx  1 root root       7 Dec 19 03:07 pve-vm--104--disk--1p2 -> ../dm-7
lrwxrwxrwx  1 root root       7 Dec 19 03:07 pve-vm--104--disk--1p5 -> ../dm-8
root@server1:/dev/mapper# 


root@server1:~# mount /dev/mapper/pve-vm--104--disk--1p1 /mnt
root@server1:~# ls -l /mnt
total 100
drwxr-xr-x  2 root root  4096 Dec 18 09:21 bin
drwxr-xr-x  3 root root  4096 Dec 18 09:21 boot
drwxr-xr-x  3 root root  4096 Dec 16 08:35 dev
drwxr-xr-x 89 root root  4096 Dec 18 09:24 etc
drwxr-xr-x  3 root root  4096 Dec 16 08:48 home
lrwxrwxrwx  1 root root    33 Dec 16 08:36 initrd.img -> boot/initrd.img-3.19.0-25-generic
drwxr-xr-x 21 root root  4096 Dec 16 08:41 lib
drwxr-xr-x  2 root root  4096 Dec 16 08:35 lib64
drwx------  2 root root 16384 Dec 16 08:35 lost+found
drwxr-xr-x  3 root root  4096 Dec 16 08:35 media
drwxr-xr-x  2 root root  4096 Apr 10  2014 mnt
drwxr-xr-x  2 root root  4096 Aug  5 06:11 opt
drwxr-xr-x  2 root root  4096 Apr 10  2014 proc
drwx------  4 root root  4096 Dec 18 09:06 root
drwxr-xr-x  2 root root  4096 Dec 16 08:49 run
drwxr-xr-x  2 root root 12288 Dec 18 09:20 sbin
drwxr-xr-x  2 root root  4096 Aug  5 06:11 srv
drwxr-xr-x  2 root root  4096 Mar 13  2014 sys
drwxrwxrwt  2 root root  4096 Dec 18 19:17 tmp
drwxr-xr-x 10 root root  4096 Dec 16 08:35 usr
drwxr-xr-x 12 root root  4096 Dec 16 08:44 var
lrwxrwxrwx  1 root root    30 Dec 16 08:36 vmlinuz -> boot/vmlinuz-3.19.0-25-generic
root@server1:~#
 
Good :)

The problem was that it was not available at first. You can see this in the lvs output.

First you got:
Code:
vm-104-disk-1 pve  -wi-------  32.00g

After lvchange -ay you got:
Code:
vm-104-disk-1 pve  -wi-a-----  32.00g

If you see an "a" in the fifth position of the Attr field, then the LV is active (available). If an LV is not activated at boot, it stays unavailable until you rescan for LVs or activate it manually.
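
For scripting, that check can be sketched like this (my own helper; the attr string format is the one shown in the lvs output above):

```shell
#!/bin/sh
# Sketch: classify an lvs Attr string by its 5th character, which holds
# the activation state ("a" = active).
lv_state() {
  case "$(printf '%s' "$1" | cut -c5)" in
    a) echo active ;;
    *) echo inactive ;;
  esac
}

# Hypothetical usage:
#   lv_state "$(lvs --noheadings -o lv_attr pve/vm-104-disk-1 | tr -d ' ')"
```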

A little heads up: if you only need to retrieve data and it is an ext3 or ext4 filesystem, it is safer to do:
Code:
mount -o ro,noload /dev/path-to /mnt/vm
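
And once you are done copying, don't forget to undo the mappings. A hedged sketch of the cleanup (my own wrapper, not from the thread; DRY_RUN, on by default here, only prints the commands, and the mount point and LV name are the ones used in this thread):

```shell
#!/bin/sh
# Sketch: cleanup steps after copying the files out. With DRY_RUN=1
# (the default) the commands are only printed; set DRY_RUN=0 to run
# them for real (requires root).
cleanup_vm_disk() {
  lv=${1:-pve/vm-104-disk-1}
  run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }
  run umount /mnt/vm               # release the filesystem first
  run kpartx -dv "/dev/$lv"        # drop the kpartx partition mappings
  run lvchange -an "$lv"           # optionally deactivate the LV again
}
```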
 
