LVM Question

  • Thread starter: eclipsenz (Guest)
Hello. Brand new Proxmox user here - love the software :)

I have a question regarding LVM, though.

I understand LVM is the way to go, and I have set up a test server at home, but I'm really not sure whether it's using LVM or not. And if it is, I can't see anywhere that would enable me to snapshot a machine. From what I can see it is using LVM, but can someone confirm for me? Here's some output:

Code:
strider:/dev# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=80688095-8ad2-454c-b5af-d98f3e4ac578 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
Code:
strider:/dev# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/pve-root    40G   737M    37G   2% /
tmpfs                  458M      0   458M   0% /lib/init/rw
udev                    11M   639k   9.9M   7% /dev
tmpfs                  458M      0   458M   0% /dev/shm
/dev/mapper/pve-data   110G    15G    95G  14% /var/lib/vz
/dev/sda1              529M    33M   470M   7% /boot

Code:
strider:/dev# sfdisk -l

Disk /dev/sda: 19457 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+     65-     66-    524288   83  Linux
/dev/sda2         65+  19456-  19392- 155764032   8e  Linux LVM
/dev/sda3          0       -       0          0    0  Empty
/dev/sda4          0       -       0          0    0  Empty

Code:
strider:/dev# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               148.55 GB
  PE Size               4.00 MB
  Total PE              38028
  Alloc PE / Size       37005 / 144.55 GB
  Free  PE / Size       1023 / 4.00 GB
  VG UUID               aSWW7e-o95L-XSO5-PMlf-JtlY-sGzV-6ZXSdr
Code:
strider:/dev# lvdisplay
  --- Logical volume ---
  LV Name                /dev/pve/swap
  VG Name                pve
  LV UUID                lMmDCK-8QTp-SwsD-67Un-Q88L-yeLp-A0B0xw
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.00 GB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/pve/root
  VG Name                pve
  LV UUID                lXlTwr-7JDZ-NoTU-s3sl-q4Rr-mHvO-irU20Z
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                37.25 GB
  Current LE             9536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Name                /dev/pve/data
  VG Name                pve
  LV UUID                p0VThG-XmxN-2weo-Qy7A-MTSe-9lLC-hqck4z
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                103.30 GB
  Current LE             26445
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2

I went into the Storage menu in Proxmox, chose "Add LVM Group" and called it VM; however, it shows up as 97.30% full. I'm not really sure what's going on here:

[Screenshot: Storage page showing the new "VM" LVM group at 97.30% usage]

Any help would be appreciated. I did search around, but I couldn't really make sense of what I read... complete LVM noob.

Regards,

Nick
 
You have two main volumes in the VG named 'pve': root and data (plus a swap volume). Almost all of the VG's physical extents (37005 of 38028, per your vgdisplay output) are already allocated to them - that's why your new 'VM' LVM storage shows up as ~97.30% full. If you want to use LVM block devices for your VMs you'd have to shrink the data volume (after shrinking its filesystem first), and then you'd have some free space for new volumes.
 
Thanks for your reply, meto.

I'm not sure what you mean by an LVM block device. I'm just trying to figure out whether my VMs are stored on LVM out of the box, and if not, how one would go about doing this?
 
They are stored as files. Your data partition sits on LVM, but the VM images themselves are just files on that filesystem - so no, they are not on LVM directly. If you want them stored on LVM you'd have to free some space in the volume group, as I said earlier.
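
A quick way to see the difference (the VMID 101 below is just a hypothetical example):

Code:
# Out of the box: VM disks are plain files on the LVM-backed ext3 filesystem
ls /var/lib/vz/images/101/        # e.g. vm-101-disk-1.raw
# On LVM storage, each VM disk would instead be a logical volume of its own
lvs pve                           # would list e.g. vm-101-disk-1 next to root/data/swap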
 
Ahhh, I understand now. So how do I go about shrinking the current LVM volume? What's the appropriate size to shrink it to?

Regards

Nick
 
I don't remember the exact commands.
1) Shrink the ext3 filesystem to suit your needs (e.g. ISOs are stored there, so leave room for them)
2) Shrink the /dev/pve/data volume
3) That's it - see the sketch below

PS.
It's a very risky operation and you can damage the whole data partition. Do it at your own risk. Anyhow, I managed to do it without problems. Be precise with the sizes. To be safe, you can shrink the ext3 filesystem a bit more than the data volume, so the filesystem is guaranteed to fit inside the shrunken volume.
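
From memory, the sequence is something like this (untested as written - the sizes are examples only, adjust them to your needs and back up first):

Code:
umount /var/lib/vz                 # the data filesystem must be offline
e2fsck -f /dev/pve/data            # check the filesystem before resizing
resize2fs /dev/pve/data 30G        # 1) shrink the ext3 filesystem first...
lvreduce -L 31G /dev/pve/data      # 2) ...then the LV, leaving a safety margin
mount /var/lib/vz                  # remount; the freed extents stay in the VG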
 
I'm very confused here.

If LVM allows high availability by having each virtual machine on its own LVM volume, why isn't this out-of-the-box functionality? I'm really stuck here and would love some help.

From a fresh install I would like to set things up appropriately so that this is achievable. I tried shrinking the existing volumes, except I just ended up corrupting the entire filesystem lol :(
 
LVM does not bring you HA by itself. For that you need to look at DRBD.

Like I said, did you shrink the ext3 filesystem first? You might try shrinking it to, let's say, 30GB (unmount it first) and then shrinking the volume to 31GB. That way is safer, because the filesystem is guaranteed to be smaller than the volume.
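
Afterwards you can verify that the shrink worked and the VG has free extents (standard LVM commands; the output will differ on your box):

Code:
vgs pve     # VFree should now show the space freed up for new VM volumes
lvs pve     # the data LV should report its reduced size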
 
What are the benefits of LVM versus the out-of-the-box installation?
 
It is faster - there is no extra filesystem layer between the VM and the host. That's pretty much it. It's harder to move, though...
 
So if I want to cluster later on, how will this affect me? Do you mean LVM makes it more difficult to transfer VMs across machines?
 
For a cluster it's best (in my mind) to have LVM over DRBD. It's more difficult to move a VM from one server to another, since you have to copy the volumes yourself using dd (or export them to .raw files). There's a thread on the forum about that; I once proposed including LVM volume transfer in Proxmox.
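
Roughly like this with dd (the LV name, size and hostname below are made up for the example, and the VM must be stopped during the copy):

Code:
# On the target host: create a matching LV at least as big as the source
lvcreate -L 32G -n vm-101-disk-1 pve
# On the source host: raw-copy the volume over SSH
dd if=/dev/pve/vm-101-disk-1 bs=1M | ssh root@otherhost 'dd of=/dev/pve/vm-101-disk-1 bs=1M'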
 
To clarify, is HA the same as live migration, or are they different?
 
HA is a general term: http://en.wikipedia.org/wiki/High_availability

Live migration is just a feature where you can move a running VM or container from one physical host to another without downtime (works for KVM guests and containers).
 
Thanks for the info, Tom.

I think I understand now. A high-availability cluster is pretty much like a network RAID of servers.

So, back to my original thread topic: would an out-of-the-box installation of Proxmox support live migration?

Regards,
Nick
 
To reach HA you need a bundle of features and a lot of redundant components - it's more than just a "network RAID of servers".

As for whether an out-of-the-box installation supports live migration:

Containers: yes (OpenVZ can live-migrate without the need for shared storage, using the 2.6.18 kernel branch)

KVM: yes, you just need shared storage or DRBD
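
For example, from the command line (the VMIDs and the target node name are made up; the web interface offers the same via the Migrate button):

Code:
vzmigrate --online node2 101       # live-migrate OpenVZ container 101 to node2
qm migrate 102 node2 --online      # live-migrate KVM guest 102 (needs shared storage or DRBD)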
 
DRBD looks a bit complicated at the moment - what's the best way one would go about setting up shared storage?
 
The easiest is NFS. Live migration for KVM works (but you lose the vzdump LVM snapshot mode).

Reliable and secure iSCSI is also not that easy to set up. A bit easier is a FC SAN, but much more expensive (though reliable and powerful). There are also others; it just depends on your needs.
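
For reference, an NFS storage added via the Storage menu ends up in /etc/pve/storage.cfg and looks something like this (the server address and export path below are made up):

Code:
nfs: shared-vm
        export /srv/vmstore
        path /mnt/pve/shared-vm
        server 192.168.1.10
        content images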