LVM Performance worries

yswery

Well-Known Member
May 6, 2018
Hi there!

Till now we have been using Proxmox 3.X.
Now we are trialing the latest Proxmox 5.1 and we are really disappointed by the disk performance of the LVM system. We just want to know if this is expected or if something is wrong.

System:
* 8 x 2TB SATA drives in RAID 0 (yes, just for testing's sake, don't worry, this is not production)
* LSI MegaRaid SAS 2008 Card
* No active VM/CT on machine
* All testing done on the host node by mounting the LVM partitions


On the root disk we get the following:

Code:
root@px5-testing:/# hdparm -Tt  /dev/mapper/pve-root
/dev/mapper/pve-root:
 Timing cached reads:   14196 MB in  2.00 seconds = 7104.48 MB/sec
 Timing buffered disk reads: 2554 MB in  3.00 seconds = 851.18 MB/sec

-----------------------------------------------------------------------------------

root@px5-testing:/# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.17766 s, 493 MB/s



while on a mounted LVM partition:

Code:
root@px5-testing:/mnt-lvm-test# hdparm -Tt /dev/mapper/pve-vm--100--disk--1
/dev/mapper/pve-vm--100--disk--1:
 Timing cached reads:   13848 MB in  2.00 seconds = 6929.65 MB/sec
 Timing buffered disk reads: 1298 MB in  3.00 seconds = 432.49 MB/sec

-----------------------------------------------------------------------------------

root@px5-testing:/mnt-lvm-test# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.82619 s, 157 MB/s

This is the exact same hardware, and I would have hoped/expected the same, or at least very similar, results. But as you can see, the tests on the LVM partition are much slower than on the root volume.

Any help would be greatly appreciated!
 
Hi,

Your benchmarks are meaningless.
If you want real benchmarks, use fio.
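
For example, a 4k random-write run against the thin volume could look something like this (the device path matches the one tested above; the job parameters are illustrative, not a tuned recommendation):

Code:
# WARNING: writing to the raw device destroys any filesystem/data on it
fio --name=randwrite --filename=/dev/mapper/pve-vm--100--disk--1 \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=32 --size=4G --runtime=60 --time_based --group_reporting

--direct=1 bypasses the page cache, so the numbers reflect the storage itself rather than RAM, unlike hdparm's cached reads.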

But do consider that PVE 3.x has no Meltdown and Spectre fixes, which can slow down system calls (I/O) by up to 20%.
Also, you can't compare LVM with thin-LVM.
Thin-LVM is thinly allocated, so disk space has to be allocated before you can write to it.
 
Also, when you first write, LVM-thin needs to allocate blocks. This is time-consuming, but it only needs to be done once, so subsequent tests writing to already-allocated blocks will be faster.
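
A rough way to see this is to run the exact dd test from above twice in a row on the thin volume:

Code:
# first pass: LVM-thin still has to allocate pool blocks, so it is slower
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
# second pass: writes hit already-allocated blocks and should land much closer to the plain-LVM result
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync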

Besides, you can still use LVM if you do not like the features of LVM-thin.
 
Besides, you can still use LVM if you do not like the features of LVM-thin.
Other than potentially saving disk space, what else am I missing that LVM-thin offers over LVM? And how would I go about switching to LVM instead of LVM-thin?


But do consider that PVE 3.x has no Meltdown and Spectre fixes, which can slow down system calls (I/O) by up to 20%.
Sorry, I didn't mean to compare PVE 3.X to 5.X; I was just comparing, on the same system and hardware, the two different mounted partitions and their dd/hdparm output.
 
Other than potentially saving disk space, what else am I missing that LVM-thin offers over LVM? And how would I go about switching to LVM instead of LVM-thin?

Thin provisioning, efficient snapshots, and linked clones.
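
If you do want to switch, one possible route is to drop the thin pool and register the volume group as plain LVM storage. An untested sketch, assuming the default pve volume group and pve/data thin pool (the storage ID local-lvm-thick is made up):

Code:
# WARNING: this destroys every disk image stored in the thin pool
lvremove pve/data
# remove the old thin storage entry (local-lvm is the default ID)
pvesm remove local-lvm
# register the volume group itself as plain (thick) LVM storage
pvesm add lvm local-lvm-thick --vgname pve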
 
