[SOLVED] Partition disk LVM-thin with Local?

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I was wondering if someone else has had the same question. When configuring a hardware RAID, whether RAID 10 or RAID 5, it creates a virtual disk, and the idea is to later partition that into LVM-thin for the VMs and a local directory for quick backups, ISO storage, etc. Since I have another disk just for the Proxmox OS, the plan is to partition the 1 TB hardware RAID disk into 500 GB for LVM-thin and the other 500 GB as a normal directory. The issue I'm running into is that once I create the normal directory, I cannot then create the LVM-thin pool.
I ran these steps to create it:

Code:
cfdisk /dev/sdb
then I would create a new -> primary -> 500G
then write and quit

Next, format the partition:

Code:
mkfs.ext4 /dev/sdb1

Next, create a folder to mount it on:

Code:
mkdir /backupvm
then mount it

Code:
mount -t ext4 /dev/sdb1 /backupvm
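If you want that mount to survive a reboot, an /etc/fstab entry along these lines should do it (the mount options below are an assumption; adjust to taste):

```shell
# line to append to /etc/fstab so the mount persists across reboots
FSTAB_LINE='/dev/sdb1 /backupvm ext4 defaults 0 2'
echo "$FSTAB_LINE"
# after appending it (e.g. echo "$FSTAB_LINE" >> /etc/fstab),
# run `mount -a` to verify it mounts cleanly without rebooting
```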

That works great, but when I try to create the LVM-thin:

Code:
vgcreate -s 500G vm_thin /dev/sdb

I get an error message saying it's going to format everything.

Any ideas?

Thank you
 
> As I have another disk for just the OS proxmox, the idea is to partition 1tb disk Hardware Raid to 500gigs LVM-thin and the other 500gigs as a normal directory

Then you probably need to do pvcreate /dev/sdb2 (where 2 is the second partition of your disk) and then create a volume group using that PV device.
see
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_creating_a_volume_group
for examples
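For reference, the whole sequence suggested above might look something like this. The device name comes from the thread; the partition sizes and type codes are assumptions, and this wipes /dev/sdb, so double-check before running:

```shell
# DESTRUCTIVE: zaps the existing partition table on /dev/sdb
sgdisk --zap-all /dev/sdb
sgdisk -n 1:0:+500G -t 1:8300 /dev/sdb   # partition 1: Linux fs, for the ext4 directory
sgdisk -n 2:0:0     -t 2:8e00 /dev/sdb   # partition 2: Linux LVM, rest of the disk
partprobe /dev/sdb                       # tell the kernel to reread the table

pvcreate /dev/sdb2                       # PV on the second partition only
vgcreate vmdata /dev/sdb2                # VG on top of that PV
```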
 
Thanks for the reply.
So after reading that link, this is what I did. First I reformatted the disk to GPT
by running:
Code:
fdisk /dev/sdb
then option g to create a new empty GPT partition table

Then again I created the 500 GB partition:
Code:
cfdisk /dev/sdb
then I would create a new -> primary -> 500G
then write and quit

Then I ran this command, which creates the second partition from the remaining space:

Code:
sgdisk -N 2 /dev/sdb

The outcome of this was:
The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
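The message itself points at a quicker fix than rebooting; partprobe (from the parted package on Debian) asks the kernel to reread the partition table:

```shell
partprobe /dev/sdb   # reread the partition table without rebooting
lsblk /dev/sdb       # sdb1 and sdb2 should both be listed now
```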

But the next part is where I get stuck, as it doesn't find the device.
I would run this:
Code:
 pvcreate --metadatasize 250k -y -ff /dev/sdb2

But the outcome is:
Device /dev/sdb2 not found (or ignored by filtering).

Thank you
 
EDIT: Solved that issue with a reboot.

So after doing so I ran
Code:
pvcreate --metadatasize 250k -y -ff /dev/sdb2

giving me this outcome, which is good:
Physical volume "/dev/sdb2" successfully created.

then ran
Code:
vgcreate vmdata /dev/sdb2

giving me this outcome, which is also good:
Volume group "vmdata" successfully created

But when I try to create the LV:
Code:
lvcreate -n vz -V 500G vmdata/data
I get this
Using default stripesize 64.00 KiB.
Pool data not found in Volume group vmdata.

Odd, because the volume group name was vmdata.

Thank you
 
EDIT 2:
Finally figured it out. If anyone else is in the same pickle:

Ignore this part:
lvcreate -n vz -V 500G vmdata/data

Instead run this command:
Code:
lvcreate -L 498G -T -n vmstore vmdata
then on the web GUI add the LVM-thin storage.
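The difference between the two commands: -V creates a thin volume inside an already existing pool, while -L ... -T creates the pool itself, which is what was missing. The GUI step can also be done from the shell with pvesm; the storage ID "vm-thin" below is made up, while the VG and pool names follow the thread:

```shell
# create the thin pool, then register it as Proxmox VE storage
lvcreate -L 498G -T -n vmstore vmdata
pvesm add lvmthin vm-thin --vgname vmdata --thinpool vmstore --content images,rootdir
```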
Only question is: LVM-thin or LVM? I was reading that LVM-thin is a tad bit faster?

Thank you
 
No, LVM and LVM-thin have different properties, but you cannot say one is *faster*.
If you're interested in speed, use (enterprise) SSDs. There is much more to gain by switching from mechanical disks to SSDs than by fiddling with different storage settings.
 
Thanks for the reply. Could you enlighten me on the difference between LVM and LVM-thin?

Thank you
 
Hi! AFAIK the 'Thin' refers to thin provisioning.

From the Wiki ( https://pve.proxmox.com/wiki/LVM2 ) :
" LVM normally allocates blocks when you create a volume. LVM thin pools instead allocates blocks when they are written. This behavior is called thin-provisioning, because volumes can be much larger than physically available space. "

Comparison (from the wiki):
[image: lvm.png]

Which basically means that you can provision more space than the disk physically has.
I have a 500 GB SSD with more than twenty VMs, all with 50 GB (or larger) disks, which adds up to more than 1 TB, but my SSD still has 120 GB free because of thin provisioning.

You would of course have problems if you were to fill those disks.
Free space is also shared among the volume group (for obvious reasons).
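One practical consequence is that you have to watch real usage yourself, since every guest disk reports its full virtual size. The Data% and Meta% columns of lvs show how full the pool actually is (the VG and pool names here follow the earlier posts):

```shell
# show actual fill level of the thin pool, not the virtual sizes
lvs -o lv_name,lv_size,data_percent,metadata_percent vmdata
```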

Cheers!
Gus
 
This one did help.
I was getting the error "sdb1 busy, ignored by filter",
so I created sdb2, sdb3, and sdb4 with large space,

and mounted them.
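For anyone else hitting the "busy / ignored by filter" message on an existing partition, it is often a leftover filesystem signature on the device; wipefs can clear it. This is destructive to whatever was on the partition, so it assumes sdb1 is expendable:

```shell
umount /dev/sdb1 2>/dev/null || true  # make sure nothing has it mounted
wipefs -a /dev/sdb1                   # clear old filesystem/RAID signatures
pvcreate /dev/sdb1                    # should no longer be filtered out
```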
 
