Mounting LVM thin directly on host?

Dunuin

Hi,

I created a LUKS-encrypted LVM thin pool using this...
Code:
# create LUKS encrypted partition
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 /dev/disk/by-id/ata-INTEL_SSDSC2BA200G4_BTHV636205M3200MGN-part1
# unlock partition
cryptsetup luksOpen /dev/disk/by-id/ata-INTEL_SSDSC2BA200G4_BTHV636205M3200MGN-part1 lukslvm
# create LVM
pvcreate /dev/mapper/lukslvm
vgcreate vgluks /dev/mapper/lukslvm
# create LVM thin
lvcreate -l99%FREE -n lvthin vgluks
lvconvert --type thin-pool vgluks/lvthin
# manually added to PVE as LVM thin storage named "VMpool18" using WebUI
...and running VMs on this storage works fine.
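For reference, the WebUI step just writes an entry to /etc/pve/storage.cfg; assuming the names from above, it should look roughly like this:
Code:
# /etc/pve/storage.cfg (excerpt) - what the WebUI adds for an LVM-thin storage
lvmthin: VMpool18
        thinpool lvthin
        vgname vgluks
        content rootdir,images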
But I only created it to run some benchmarks inside a VM versus directly on the host, and now I have two questions:

1.) Is it possible to disable caching for LVM too? I ran my previous benchmarks on ZFS with "primarycache=metadata", so comparing LVM thin against ZFS would be unfair if LVM used read or write caching (see the fio sketch below).
2.) How can I create an LVM thin block device manually and format and mount it directly on the host, so that I can compare fio tests run inside the VM against fio run directly on the host?
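For the host-side runs, the page cache at least can be taken out of the equation by opening the device with O_DIRECT, whatever dm-thin itself does. A minimal fio sketch, assuming a hypothetical scratch thin volume /dev/vgluks/testvol:
Code:
# 4k random read test that bypasses the page cache via O_DIRECT
fio --name=randread --filename=/dev/vgluks/testvol \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting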

Edit:
Some more info:
Code:
lvdisplay vgluks/lvthin
  --- Logical volume ---
  LV Name                lvthin
  VG Name                vgluks
  LV UUID                4czlTI-O0Tk-OJfA-zkiV-4HHO-HsXz-EFRCEi
  LV Write Access        read/write
  LV Creation host, time Hypervisor, 2021-08-28 20:13:41 +0200
  LV Pool metadata       lvthin_tmeta
  LV Pool data           lvthin_tdata
  LV Status              available
  # open                 3
  LV Size                <184.43 GiB
  Allocated pool data    1.43%
  Allocated metadata     11.05%
  Current LE             47214
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6


vgdisplay vgluks
  --- Volume group ---
  VG Name               vgluks
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               186.29 GiB
  PE Size               4.00 MiB
  Total PE              47691
  Alloc PE / Size       47262 / <184.62 GiB
  Free  PE / Size       429 / <1.68 GiB
  VG UUID               RGMn5D-N3ng-a4e8-c0P5-1zWQ-LhfI-B8ZVwe
 
pvdisplay /dev/mapper/lukslvm
  --- Physical volume ---
  PV Name               /dev/mapper/lukslvm
  VG Name               vgluks
  PV Size               186.29 GiB / not usable 1.19 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              47691
  Free PE               429
  Allocated PE          47262
  PV UUID               HkV5DJ-83KS-fByM-XslJ-8iJZ-8USC-wmXATC
 
OK, I created a thin volume by adding a new virtual disk to a VM in the WebUI and then detaching it again.
Then I used...
Code:
# format the thin volume with a 4K block size
mkfs.ext4 -b 4096 /dev/vgluks/vm-129-disk-2
# mount it on the host without access-time updates
mkdir /mnt/test
mount -t ext4 -o noatime,nodiratime /dev/vgluks/vm-129-disk-2 /mnt/test
...to format and mount it.
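As an alternative to the detach workaround, a thin volume can also be created straight from the pool with lvcreate; a sketch, with the volume name and size picked arbitrarily:
Code:
# create a 32G thin volume in the existing pool
lvcreate -V 32G -T vgluks/lvthin -n testvol
# format and mount it the same way as above
mkfs.ext4 -b 4096 /dev/vgluks/testvol
mkdir /mnt/test2
mount -t ext4 -o noatime,nodiratime /dev/vgluks/testvol /mnt/test2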

I'm still not sure whether LVM thin does any caching of its own and, if so, how to disable it.
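As far as I can tell, dm-thin itself adds no read or write cache; any caching on the host comes from the normal Linux page cache (or from lvmcache/dm-cache, which has to be set up explicitly). So besides direct=1 in fio, dropping the kernel caches between runs should be enough:
Code:
# flush dirty pages, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches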
 