Slow IO performance with LVM and no RAID

I have no idea what the difference between version 1 and version 2 swap partitions is. My best guess is that it has no influence on the speed of LVM partitions. It can easily be changed: disabling swap and regenerating the swap partition with the default settings will create a version 2 swap partition. Remember to turn swap back on again ;-)
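Roughly, something like this should do it (the device name /dev/mapper/vg1-swap is only an example; use whatever your swap device actually is):

Code:
swapoff /dev/mapper/vg1-swap    # stop using the swap device
mkswap /dev/mapper/vg1-swap     # write a fresh swap signature with the current defaults
swapon /dev/mapper/vg1-swap     # turn swap back on
free -m                         # verify that swap is active again

Note that mkswap assigns a new UUID, so if /etc/fstab references the swap space by UUID it has to be updated as well.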

AFAIK the setting in lvm.conf only influences new LVM partitions, but what is worse, it seems this setting is applied when the PV (physical volume) is created. This indicates that you will have to recreate your PV before the correct alignment is used :-\

If you have to recreate the PV anyway, my recommendation would be to start from scratch and use the defaults: swap version 2 and 1 MB partition alignment, which is the default.
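To check what alignment a PV actually uses, and to request it explicitly when recreating, something like this should work (/dev/sdb1 is just a placeholder for an empty disk or partition - recreating a PV destroys its contents):

Code:
pvs -o +pe_start                        # the "1st PE" column shows where the data area starts, e.g. 1.00m
pvcreate --dataalignment 1m /dev/sdb1   # explicitly request 1 MiB data alignment when (re)creating the PV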

That is consistent with my assumptions. My best bet is to create a new system from scratch, following all the recommendations from this post. New ones are welcome, of course. ;)

I'll report back when a new system with similar hardware is ready, so this thread can serve as a reference for anyone else.
 
I've made a new system, with ext3 this time, still a single SATA disk, and what a change in the numbers:

Code:
root@servidor03:~# pveperf /var/lib/vz
CPU BOGOMIPS:      54394.16
REGEX/SECOND:      1635928
HD SIZE:           1771.76 GB (/dev/mapper/vg1-lv1)
BUFFERED READS:    195.13 MB/sec
AVERAGE SEEK TIME: 16.32 ms
FSYNCS/SECOND:     1177.53
DNS EXT:           35.34 ms
DNS INT:           6.96 ms

This time it's using Proxmox version 3.1, but partitions are built the same way.
 
I think pveperf is useless for measuring fsyncs on ext4 partitions. The numbers simply do not add up. Every other tool I have used to measure disk performance shows that ext4 is superior to ext3.
 
The big performance difference between ext3 and ext4 on Proxmox is caused by the fact that the default mount option for ext3 in the kernel used by Proxmox is barrier=0, while the default for ext4 is barrier=1. Tests I have made show that fsync/sec as measured by pveperf improves by a factor of 40 when barrier=0 is used as a mount option, so comparing ext3 and ext4 with their default mount options on Proxmox is totally misleading. Adding barrier=0 to the ext4 mount options shows 3-5% better fsync/sec with ext4 compared to ext3.

Since auto_da_alloc is enabled by default, using barrier=0 with ext4 gives exactly the same kind of safety as using ext3 with barrier=0, so barrier=1 is not the right choice for ext4 on Proxmox unless you want ext4 performance to suffer needlessly.
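For anyone wanting to try this, it is only a mount option; a minimal sketch, using the device and mount point from the pveperf output earlier in the thread (adjust to your own setup):

Code:
# /etc/fstab - mount the ext4 volume without write barriers
/dev/mapper/vg1-lv1  /var/lib/vz  ext4  defaults,barrier=0  0  1

# or change it on a running system without a reboot:
mount -o remount,barrier=0 /var/lib/vz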
 
I think pveperf is useless for measuring fsyncs on ext4 partitions. The numbers simply do not add up. Every other tool I have used to measure disk performance shows that ext4 is superior to ext3.

Sorry, but pveperf measures FSYNCS/SECOND. Other tools measuring the same value should return the same results. So which tool reports different values for FSYNCS/SECOND?
 
Adding barrier=0 to the ext4 mount options shows 3-5% better fsync/sec with ext4 compared to ext3.

Unfortunately, this is not always the case - we have seen large differences here depending on underlying hardware. Also, running ext4 with barrier=0 is considered unsafe (ext3 is known to work without problems).
 
I did not state that pveperf is useless in general. What I was stating is that using pveperf on the current Proxmox kernel to decide between ext3 and ext4 is useless, since the default mount options greatly favor ext3's performance.

Code:
"When comparing versus ext3, note that ext4 enables write barriers by default, while ext3 does
not enable write barriers by default.  So it is useful to use explicitly specify whether barriers
are enabled or not when via the '-o barriers=[0|1]' mount option for both ext3 and ext4
filesystems for a fair comparison"

https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
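So for a fair comparison with pveperf, the barrier setting should be forced to the same value on both filesystems before measuring; roughly like this (mount point as used earlier in this thread):

Code:
# run the benchmark with barriers enabled ...
mount -o remount,barrier=1 /var/lib/vz
pveperf /var/lib/vz

# ... and again with barriers disabled
mount -o remount,barrier=0 /var/lib/vz
pveperf /var/lib/vz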
 
Unfortunately, this is not always the case - we have seen large differences here depending on underlying hardware. Also, running ext4 with barrier=0 is considered unsafe (ext3 is known to work without problems).
Using ext4 with barrier=0 is as safe as using ext3 with barrier=0 due to the new default mount option:

Code:
auto_da_alloc(*)	Many broken applications don't use fsync() when 
noauto_da_alloc		replacing existing files via patterns such as
			fd = open("foo.new")/write(fd,..)/close(fd)/
			rename("foo.new", "foo"), or worse yet,
			fd = open("foo", O_TRUNC)/write(fd,..)/close(fd).
			If auto_da_alloc is enabled, ext4 will detect
			the replace-via-rename and replace-via-truncate
			patterns and force that any delayed allocation
			blocks are allocated such that at the next
			journal commit, in the default data=ordered
			mode, the data blocks of the new file are forced
			to disk before the rename() operation is
			committed.  This provides roughly the same level
			of guarantees as ext3, and avoids the
			"zero-length" problem that can happen when a
			system crashes before the delayed allocation
			blocks are forced to disk.
 
Why is barrier=1 the default then?
It is the other way round. It is a Red Hat bug that barrier=0 is the default mount option for ext3. This goes strictly against the recommendations of Ted Ts'o (http://lwn.net/Articles/283161/), since using barrier=0 is just as unsafe for ext3 as it is for ext4. So if you are concerned about safety you should change ext3 to use barrier=1 as well and settle for the horrible performance. If you do not want to change the default mount option for ext3, you should treat ext4 the same way. Sticking to the current situation, barrier=0 for ext3 and barrier=1 for ext4, and then claiming ext3 is superior and therefore the recommended way is obtuse.
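If you prefer the safe route and want barriers on ext3 as well, it is the same kind of change; a sketch using the device and mount point from this thread (adjust as needed):

Code:
# /etc/fstab - enable write barriers on the ext3 volume
/dev/mapper/vg1-lv1  /var/lib/vz  ext3  defaults,barrier=1  0  1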
 
So will you make that change for ext4 as well?

No. Ext4 has always had barrier=1, and it seems everybody recommends running it that way.

But you can simply change the mount options; that is completely up to you.
 
