HowTo mount ext4? pveperf benchmarks on different hardware.

tom

Proxmox Staff Member
pls post your experiences with ext4, include the mount options (cat /proc/mounts), pveperf results and details about the hardware.
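For anyone who wants to reproduce such a test, a minimal command sequence could look like this (just a sketch; /dev/sdX1 and /mnt are placeholders for a spare, empty test partition and a mount point, adapt to your setup):
Code:
# WARNING: mkfs destroys all data on the partition - use a spare one
mkfs.ext4 /dev/sdX1

# mount with the options you want to compare, e.g. noatime + nodelalloc
mount -o defaults,noatime,nodelalloc /dev/sdX1 /mnt

# show the effective mount options
grep sdX1 /proc/mounts

# run the Proxmox VE benchmark against the mounted filesystem
pveperf /mnt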
 
Hi,
Raid-1 with 2 OCZ Vertex 2 on an Areca arc-1212:
Code:
# Debian 6 "standard-mount" but noatime
root@powerbox:/pve/usr/lib/perl5# mount /dev/sde1 -o defaults,noatime /mnt
root@powerbox:/pve/usr/lib/perl5# /pve/usr/bin/pveperf /mnt
CPU BOGOMIPS:      24082.55
REGEX/SECOND:      1137502
HD SIZE:           110.00 GB (/dev/sde1)
BUFFERED READS:    361.39 MB/sec
AVERAGE SEEK TIME: 0.30 ms
FSYNCS/SECOND:     333.46

root@powerbox:/pve/usr/lib/perl5# grep sde /proc/mounts 
/dev/sde1 /mnt ext4 rw,noatime,user_xattr,acl,barrier=1,data=ordered 0 0

# Flag nodelalloc
root@powerbox:/pve/usr/lib/perl5# mount /dev/sde1 -o defaults,noatime,nodelalloc /mnt
root@powerbox:/pve/usr/lib/perl5# /pve/usr/bin/pveperf /mnt
CPU BOGOMIPS:      24082.55
REGEX/SECOND:      1206731
HD SIZE:           110.00 GB (/dev/sde1)
BUFFERED READS:    355.66 MB/sec
AVERAGE SEEK TIME: 0.29 ms
FSYNCS/SECOND:     2828.53

root@powerbox:/pve/usr/lib/perl5# grep sde /proc/mounts 
/dev/sde1 /mnt ext4 rw,noatime,user_xattr,acl,barrier=1,nodelalloc,data=ordered 0 0
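To keep such options across reboots, an /etc/fstab entry along these lines should work (a sketch, assuming /dev/sde1 and /mnt as in the test above):
Code:
# /etc/fstab - ext4 with noatime and nodelalloc (assumed device/mountpoint)
/dev/sde1  /mnt  ext4  defaults,noatime,nodelalloc  0  2

# apply without rebooting and verify
mount -o remount /mnt
grep sde1 /proc/mounts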
 
Test platform: HP ProLiant MicroServer N36L with an Adaptec 6805 raid controller

All pveperf tests are done on an ext4-formatted 20 GB partition on a WD1002FBYS (1 TB SATA).

Mounted with just the default settings for ext4:

Code:
root@pve2-hp4:~# cat /proc/mounts | grep sdb1
/dev/sdb1 /mnt/sdb1 ext4 rw,relatime,barrier=1,data=ordered 0 0
root@pve2-hp4:~# pveperf /mnt/sdb1/
CPU BOGOMIPS:      5191.49
REGEX/SECOND:      487698
HD SIZE:           18.34 GB (/dev/sdb1)
BUFFERED READS:    107.77 MB/sec
AVERAGE SEEK TIME: 8.10 ms
FSYNCS/SECOND:     202.03

Mounted with the nodelalloc flag:

Code:
root@pve2-hp4:~# cat /proc/mounts | grep sdb1
/dev/sdb1 /mnt/sdb1 ext4 rw,relatime,barrier=1,nodelalloc,data=ordered 0 0
root@pve2-hp4:~# pveperf /mnt/sdb1/
CPU BOGOMIPS:      5191.49
REGEX/SECOND:      491106
HD SIZE:           18.34 GB (/dev/sdb1)
BUFFERED READS:    107.52 MB/sec
AVERAGE SEEK TIME: 8.14 ms
FSYNCS/SECOND:     1102.18

Summary: with the nodelalloc flag I got the expected fsyncs/sec (similar to the ext3 results).
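To run the same comparison in one go, a small loop over the mount option sets can help (a sketch, assuming /dev/sdb1 and /mnt/sdb1 as above and that the partition is otherwise idle):
Code:
#!/bin/sh
# re-run pveperf for several ext4 mount option sets (sketch)
DEV=/dev/sdb1
MNT=/mnt/sdb1

for OPTS in defaults defaults,nodelalloc defaults,noatime,nodelalloc; do
    umount "$MNT" 2>/dev/null
    mount -o "$OPTS" "$DEV" "$MNT" || exit 1
    echo "=== options: $OPTS ==="
    grep "$DEV" /proc/mounts
    pveperf "$MNT"
done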
 
Test platform: HP ProLiant MicroServer N36L, no raid controller

All pveperf tests are done on an ext4-formatted 20 GB partition on a WD1002FBYS (1 TB SATA).

Mounted with just the default settings for ext4:
Code:
root@pve2-hp1:~# cat /proc/mounts | grep sdb1
/dev/sdb1 /mnt/sdb1 ext4 rw,relatime,barrier=1,data=ordered 0 0
root@pve2-hp1:~# pveperf /mnt/sdb1/
CPU BOGOMIPS:      5191.50
REGEX/SECOND:      491593
HD SIZE:           18.34 GB (/dev/sdb1)
BUFFERED READS:    107.91 MB/sec
AVERAGE SEEK TIME: 6.84 ms
FSYNCS/SECOND:     58.61

Mounted with the nodelalloc flag:
Code:
root@pve2-hp1:~# cat /proc/mounts | grep sdb1
/dev/sdb1 /mnt/sdb1 ext4 rw,relatime,barrier=1,nodelalloc,data=ordered 0 0
root@pve2-hp1:~# pveperf /mnt/sdb1/
CPU BOGOMIPS:      5191.50
REGEX/SECOND:      483994
HD SIZE:           18.34 GB (/dev/sdb1)
BUFFERED READS:    108.26 MB/sec
AVERAGE SEEK TIME: 6.78 ms
FSYNCS/SECOND:     37.32

Summary: the nodelalloc flag does not improve fsyncs/sec on single disks; this ext4 partition is quite slow compared to an ext3-formatted disk.

As a reference, the same partition formatted and mounted with ext3:
Code:
root@pve2-hp1:~# cat /proc/mounts | grep sdb1
/dev/sdb1 /mnt/sdb1 ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
root@pve2-hp1:~# pveperf /mnt/sdb1/
CPU BOGOMIPS:      5191.50
REGEX/SECOND:      482971
HD SIZE:           18.34 GB (/dev/sdb1)
BUFFERED READS:    108.02 MB/sec
AVERAGE SEEK TIME: 7.09 ms
FSYNCS/SECOND:     543.20
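Note that the ext3 reference above was mounted with barrier=0, while the ext4 runs used barrier=1; write barriers alone cost a lot of fsyncs/sec on a single disk, so part of the gap is likely the barrier setting rather than ext4 itself. For a closer comparison one could also test ext4 without barriers - only advisable with a BBU-backed controller or UPS (a sketch, assuming /dev/sdb1 and /mnt/sdb1 as above):
Code:
# ext4 without write barriers - unsafe on power loss without BBU/UPS
umount /mnt/sdb1
mount -o noatime,barrier=0 /dev/sdb1 /mnt/sdb1
grep sdb1 /proc/mounts
pveperf /mnt/sdb1/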
 
...
Summary: the nodelalloc flag does not improve fsyncs/sec on single disks; this ext4 partition is quite slow compared to an ext3-formatted disk.
...
Hi,
not in all cases. With a single SSD the effect is not so extreme.

Test machine: AMD Phenom(tm) II X4 945 processor - mainboard ASUS M4A78T-E

Single SSD OCZ Vertex 2
Code:
CPU BOGOMIPS:      24082.43
REGEX/SECOND:      958219
HD SIZE:           110.03 GB (/dev/sdb1)
BUFFERED READS:    186.18 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND:     2953.01

/dev/sdb1 /mnt ext4 rw,noatime,barrier=1,data=ordered 0 0

# with nodelalloc:
CPU BOGOMIPS:      24082.43
REGEX/SECOND:      994188
HD SIZE:           110.03 GB (/dev/sdb1)
BUFFERED READS:    185.78 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND:     3352.52

/dev/sdb1 /mnt ext4 rw,noatime,barrier=1,nodelalloc,data=ordered 0 0
but a normal disk also looks bad - in this case a Hitachi 2 TB SATA (HDS722020ALA330):
Code:
CPU BOGOMIPS:      24082.43
REGEX/SECOND:      985176
HD SIZE:           1833.78 GB (/dev/sdb1)
BUFFERED READS:    130.92 MB/sec
AVERAGE SEEK TIME: 12.83 ms
FSYNCS/SECOND:     45.86

/dev/sdb1 /mnt ext4 rw,relatime,barrier=1,data=ordered 0 0

# same with nodelalloc and noatime
CPU BOGOMIPS:      24082.43
REGEX/SECOND:      1005079
HD SIZE:           1833.78 GB (/dev/sdb1)
BUFFERED READS:    130.76 MB/sec
AVERAGE SEEK TIME: 12.85 ms
FSYNCS/SECOND:     30.80

/dev/sdb1 /mnt ext4 rw,noatime,barrier=1,nodelalloc,data=ordered 0 0

Udo
 
looks like the results are highly dependent on the cache size. Single SSD drives also have big caches, unlike single standard drives - I assume this explains the smaller difference here.
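For single drives, the on-disk write cache setting can be checked with hdparm (a sketch, assuming a plain SATA disk at /dev/sdb; disks behind raid controllers need the vendor tools instead):
Code:
# query the drive's volatile write cache setting
hdparm -W /dev/sdb

# full feature list, including write cache support
hdparm -I /dev/sdb | grep -i 'write cache'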

two issues/questions for me:
1. single-drive results show bad performance in these basic tests, so I do not see a reason to go for ext4 for my default Proxmox VE installations. ext3 is well known and works reliably and fast for me. Sometimes fsck takes some time, but that is acceptable for me.

2. Using a raid controller with a big cache shows good results, but what happens if the cache is full? I mean if the system is under very high load for a longer period, accessing a lot of files - just think of big OpenVZ servers. Does this mean the file performance of ext4 is behind ext3? I do not have any real-life experience here, so I hesitate to change a winning team.

Udo, which filesystem (ext3 or ext4) would you use in the following situations:

A: for single standard drives?
B: for a traditional raid10 with 4 SAS/SATA HDDs?
C: single SSD?
D: raid controller with SSD?
 
Hi Tom,
I also have no problem with ext3 - but the fsck time (on big filesystems) is a problem for production servers. If I have trouble with a server (e.g. a production stop for the whole company) and the reboot takes a lot of time due to the fsck... that is the moment where I don't like my job.
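(As a side note on fsck time: the periodic forced checks can at least be inspected and rescheduled with tune2fs; a sketch, assuming /dev/sdb1 - disabling them entirely trades boot time against early error detection:)
Code:
# show the current mount-count / interval based check settings
tune2fs -l /dev/sdb1 | grep -iE 'mount count|check'

# disable the periodic forced checks (use with care)
tune2fs -c -1 -i 0 /dev/sdb1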

But of course for critical servers we have fast raid controllers with cache - so for bigger filesystems I prefer ext4 (I guess nodelalloc is no problem thanks to the BBU).
This means ext4 for B, C and D, and ext3 for single disks - so ext3 for A.

But I mostly have LVM storage (only one server with OpenVZ, in a few weeks two).

Udo
 
Hi,
with an OCZ Vertex 3 MaxIOPS the FSYNCS are not so good with ext4:
Code:
/dev/sdb1 /mnt ext4 rw,relatime,barrier=1,nodelalloc,data=ordered 0 0

# pveperf /mnt
CPU BOGOMIPS:      24079.90
REGEX/SECOND:      972595
HD SIZE:           110.03 GB (/dev/sdb1)
BUFFERED READS:    380.55 MB/sec
AVERAGE SEEK TIME: 0.05 ms
FSYNCS/SECOND:     505.89
DNS EXT:           55.57 ms
DNS INT:           1.25 ms

####################################################

/dev/sdb1 /mnt ext4 rw,relatime,barrier=1,data=ordered 0 0

# pveperf /mnt
CPU BOGOMIPS:      24079.90
REGEX/SECOND:      981150
HD SIZE:           110.03 GB (/dev/sdb1)
BUFFERED READS:    380.78 MB/sec
AVERAGE SEEK TIME: 0.04 ms
FSYNCS/SECOND:     974.59
DNS EXT:           62.87 ms
DNS INT:           1.25 ms

####################################################
/dev/sdb1 /mnt ext3 rw,relatime,data=ordered 0 0

pveperf /mnt
CPU BOGOMIPS:      24079.90
REGEX/SECOND:      997141
HD SIZE:           110.03 GB (/dev/sdb1)
BUFFERED READS:    381.25 MB/sec
AVERAGE SEEK TIME: 0.05 ms
FSYNCS/SECOND:     5433.00
DNS EXT:           65.58 ms
DNS INT:           1.26 ms
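The fsync numbers can be cross-checked independently of pveperf with a simple synchronous dd write; dividing the number of writes by the reported runtime should give a figure in the same ballpark (a sketch, assuming the filesystem is mounted at /mnt and has some free space):
Code:
# 2000 x 4 KiB writes, each synced to disk (O_DSYNC); dd reports the runtime
dd if=/dev/zero of=/mnt/fsync-test.img bs=4k count=2000 oflag=dsync
rm /mnt/fsync-test.img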
 
yes, ext4 looks quite bad here compared with the ext3 results. 5433.00 FSYNCS/SECOND is amazing.

as we cannot clearly see that ext4 is better/faster than ext3 (or the opposite), we plan to introduce a boot parameter at the install prompt to define the file system (the default will be ext3, as it's generally faster).
 
Hi,

It's a Dell R420 with six 600 GB SSDs (RAID 0).

Code:
root@px-rech2:~# pveperf
CPU BOGOMIPS:      147190.88
REGEX/SECOND:      1381288
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    1264.47 MB/sec
AVERAGE SEEK TIME: 0.08 ms
FSYNCS/SECOND:     5753.41
DNS EXT:           50.38 ms
DNS INT:           0.65 ms

Regards,
 