Need some quick help with an LVM screwup

  • Thread starter: madjeff (Guest)
First off, just a quick thanks to the devs for a great VM platform! I've been testing heavily for the past 2 months and now plan to use it at several of my larger clients.

So, a bit of an LVM newbie here, trying to get up to speed quickly, but I've hit an issue I haven't been able to find a good answer for.

A little background. I built a new Proxmox server for testing: a Dell R610 with 64GB RAM and six drives, two 73GB drives in RAID1 and four 500GB drives in RAID5. Proxmox is installed on the mirrored pair, and the RAID5 array is designated as VM storage.

So here's the rub. In my initial haste to get this going, I created the physical volume (/dev/sdb) and the volume group (vmstore):

pvcreate /dev/sdb
vgcreate vmstore /dev/sdb

I then added the volume group in Proxmox as storage and set my test VMs to save their drive images there. Stupid, I know. I should have gone on to create logical volumes with filesystems and saved the images to those, but I didn't really think about it until today, when I wanted to mount a folder to make some image copies so I could clone a few of the VMs I'm using for scalability testing.

So my question is: how do I clean this up? I want to create a couple of logical volumes in the vmstore group so I can mount them, without losing the existing VM images. Anybody got a quick fix and can help an idiot out? =)
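I'm guessing the fix is something like the following, but I'd appreciate a sanity check before I run anything. The LV name, size, and mount point are just placeholders I made up; the free space figure comes from the vgdisplay output below.

```shell
# Carve a new logical volume out of the free space in the existing
# vmstore VG (~1.1TB free per vgdisplay) alongside the VM image LVs
lvcreate -L 200G -n images vmstore

# Put a filesystem on it and mount it for file-based image copies
mkfs.ext3 /dev/vmstore/images
mkdir -p /mnt/images
mount /dev/vmstore/images /mnt/images
```

As I understand it, this shouldn't touch the existing vm-10x-disk-1 volumes at all, since it only allocates from the VG's free extents.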


Here's the vgdisplay info:

Code:
pmox01:/# vgdisplay
  --- Volume group ---
  VG Name               vmstore
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  18
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                7
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TB
  PE Size               4.00 MB
  Total PE              357311
  Alloc PE / Size       66048 / 258.00 GB
  Free  PE / Size       291263 / 1.11 TB
  VG UUID               7VtMnS-BBX1-U5XU-jDBx-oCNp-F5RS-YXqUjn

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               67.25 GB
  PE Size               4.00 MB
  Total PE              17215
  Alloc PE / Size       16192 / 63.25 GB
  Free  PE / Size       1023 / 4.00 GB
  VG UUID               da3yyr-RE45-Gnua-rfuc-orAu-2VZt-LCRCpf
Here's the pvdisplay info:

Code:
pmox01:/# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               vmstore
  PV Size               1.36 TB / not usable 4.00 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              357311
  Free PE               291263
  Allocated PE          66048
  PV UUID               zZLIos-x4Pg-eZXQ-6oFs-C4DN-CeyF-2e4srk

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               pve
  PV Size               67.25 GB / not usable 2.41 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              17215
  Free PE               1023
  Allocated PE          16192
  PV UUID               zfCN7V-e5x2-wT1F-347G-veyg-7GU6-p1lafT
Here's the lvdisplay info. As you can see, the VM images are stored as logical volumes in the VG:
Code:
  --- Logical volume ---
  LV Name                /dev/vmstore/vm-101-disk-1
  VG Name                vmstore
  LV UUID                ewA9T8-e4EY-KNmr-cHov-8L9x-JwMb-EE9TIU
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                150.00 GB
  Current LE             38400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Logical volume ---
  LV Name                /dev/vmstore/vm-102-disk-1
  VG Name                vmstore
  LV UUID                rYgTBc-p0PP-vJL6-UTl5-2egT-3fwG-mGPOvR
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                8.00 GB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
 
Didn't mean to offend. =) I thought the approved way to do this was to save the images on the logical volumes so it's easier to manipulate the raw disk images. Am I mistaken?
 
I thought the approved way to do this was to save the images on the logical volumes so it's easier to manipulate the raw disk images. Am I mistaken?

Your images are on logical volumes:

/dev/vmstore/vm-102-disk-1
/dev/vmstore/vm-101-disk-1

So I don't really get what you mean?
 
To clarify, I have a "base" KVM image I want to duplicate several times. I thought I could just duplicate the disk image, create a new VM pointing at the copy, change the IP/hostname/MAC, and go.

So, do I just use the LVM tools to clone the logical volume to a new logical volume, and point the new VM at the new logical volume?
 
Is this as simple as creating the new VM with the same disk size as the VM to be cloned, shutting down the VM to be cloned, and then running the following dd command?

dd if=/dev/vmstore/vm-101-disk-1 of=/dev/vmstore/vm-108-disk-1
 
Udo, thanks for the reply. =)

You are right about the block size; it took a while to copy a 150GB image. =) But it did work, and I have a cloned KVM image running as I type.
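For anyone following along, the variant that made the copy bearable looked roughly like this. The target LV name is from my setup (created by Proxmox when I made the new VM with the same disk size), and the 4MB block size is just a common choice that happens to match the VG's PE size:

```shell
# Shut down the source VM first so the image is consistent, then copy
# block-for-block. The bs=4M flag is the important part: dd's default
# 512-byte block size makes a 150GB copy painfully slow.
dd if=/dev/vmstore/vm-101-disk-1 of=/dev/vmstore/vm-108-disk-1 bs=4M
```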

That said, does it make more sense to create a logical volume that I can mount and set that as my disk storage location, instead of saving directly to the volume group? Or does it really make a difference? Just wondering what best practice is here. To make things even more interesting, I have a NAS appliance on the way and will be throwing that into the mix as well. Should make for an interesting week or two... =)