Software and Hardware RAID Performance: ext3, ext4, XFS with Proxmox

AndreU, New Member, Apr 23, 2012
Yesterday we tested the performance of our Proxmox environment and discovered very low fsync rates. We initially used ext4, which was a major problem on the current kernel. We now use ext3, as this seems to be the fastest option.

Server
  • Hetzner EX 4s
  • Intel i7-2600 Quad-Core
  • 32 GB DDR3 RAM
  • 2 x 3 TB SATA 6 Gb/s HDD 7200 rpm

Code:
root@venus ~ # pveversion -v
pve-manager: 2.0-59 (pve-manager/2.0/18400f07)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.88-2pve2
clvm: 2.02.88-2pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-38
pve-firmware: 1.0-15
libpve-common-perl: 1.0-26
libpve-access-control: 1.0-18
libpve-storage-perl: 2.0-17
vncterm: 1.0-2
vzctl: 3.0.30-2pve2
vzprocps: not correctly installed
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

Software vs. Hardware RAID 1
We tested quite a few options with various filesystems but always got poor fsync rates, as measured with pveperf. In the end - with the hardware RAID already ordered - we discovered a great improvement (40x!) just by changing some mount options:

Software RAID 1: mdadm
2-Port Hardware RAID 1: 02:00.0 RAID bus controller: LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 05), no BBU / no write cache

pveperf results:

                                      Software RAID 1    Hardware RAID 1
ext3, defaults 0 0
  BUFFERED READS                      165.10 MB/sec      139.71 MB/sec
  AVERAGE SEEK TIME                   6.86 ms            7.00 ms
  FSYNCS/SECOND                       22.55              1003.28
ext3, rw,relatime,data=ordered 0 0
  BUFFERED READS                      155.52 MB/sec      130.92 MB/sec
  AVERAGE SEEK TIME                   6.30 ms            7.02 ms
  FSYNCS/SECOND                       853.28             1011.99
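Since the mount options made the biggest difference, here is a minimal sketch of how they could be set in /etc/fstab on a from-scratch Debian install (the device name /dev/md0 is a placeholder; adjust it to your layout). Note that the data= journal mode typically only takes effect on a fresh mount, so a reboot may be needed:

```
# /etc/fstab fragment - /dev/md0 is a placeholder device name
/dev/md0  /  ext3  rw,relatime,data=ordered  0  0
```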

2-Port Hardware RAID 1

02:00.0 RAID bus controller: LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 05) - no BBU / no write cache

Code:
sysbench --test=fileio --file-fsync-freq=1 --file-num=1 --file-total-size=16384 --file-test-mode=rndwr run

Results as Requests/sec / Mb/sec (empty cells were not tested); the second and third columns ran with the Proxmox packages dmsetup, libdevmapper1.02.1 and lvm2:

                                  2.6.32-5-amd64           2.6.32-5-amd64        2.6.32-11-pve
                                  (orig. Debian Squeeze)   + Proxmox packages    + Proxmox packages
ext3, defaults                    2760.15 / 43.127         2854.28 / 44.598      3241.04 / 50.641
ext3, rw,relatime,data=ordered    2749.91 / 42.967         -                     -
ext4                              2186.61 / 34.166         2189.96 / 34.218      895.48 / 13.992
XFS                               1176.80 / 18.388         2039.42 / 31.866      -
bfs                               2317.19 / 36.206         -                     -

Conclusions

  • Mount options can improve fsync rates quite a lot!
  • Hardware RAID is a bit faster on fsync rates, but it may not necessarily be worth the money - until you buy a BBU and enable the write cache.
  • Use ext3 with Proxmox.
  • Proxmox kernel 2.6.32-11-pve does something terrible with ext4.
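For a quick sanity check of fsync rates without pveperf, a rough sketch using plain dd: oflag=dsync forces one synchronous write per block, which approximates pveperf's write-and-fsync loop (numbers will not match pveperf exactly):

```shell
# Write 50 x 4 KiB blocks, each forced to disk before the next one
# (oflag=dsync = one synchronous write per block). The elapsed time
# is then roughly 50 / (fsyncs per second).
time dd if=/dev/zero of=fsync-test.img bs=4k count=50 oflag=dsync
rm -f fsync-test.img
```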
 
Can you add your hardware details? E.g. RAID controller, how it is configured, how much cache (and its settings)?

And yes, ext3 is faster and the default for Proxmox VE - but we also see good ext4 performance with hardware RAID; it depends.
 
Different kernels use different default mount options - that is why you get different results for ext4.
 
> Different kernels use different default mount options - that is why you get different results for ext4.

We tried various mount options with ext4 without any improvement. If you give me your optimized options for Proxmox, I will test them and fill in the gap.
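One way to see which defaults a given kernel actually applied is to compare /etc/fstab against /proc/mounts, which lists the effective options for every mounted filesystem:

```shell
# /proc/mounts shows the options the kernel is actually using,
# including defaults that are not spelled out in /etc/fstab
grep ' / ' /proc/mounts
```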
 
Which software RAID are you using - just the built-in kernel version managed with mdadm, or something else?
 
Thanks @AndreU for sharing that!

I use the same server (EX4s) from Hetzner and also have big problems with ext4.
Last week the server crashed during a snapshot backup with ext4 errors. I will reinstall the whole server and use ext3 with your mount options.

Big thanks!
 
Just a note: you have to tweak the mount options when you do a from-scratch Debian installation. If you use the ISO files provided by the Proxmox team, they are already set accordingly.
 
> Just a note: You have to tweak mount options when you do a "from the scratch" Debian installation. If you use the provided ISO files from the Proxmox team they are already set accordingly.
Is this something pve-1.x related? A freshly installed pve-2.0 from the ISO has no special mount options except the defaults.
 
> Is this something pve-1.x related? A fresh installed pve-2.0 from the ISO has no special mount options except default.
Hm, this is strange. Maybe I am wrong and someone on our team modified the values during our tests on this machine. We just wondered why this old server performs so well while our brand-new one was lame as a duck :)
 
