Yesterday we tested the performance of our Proxmox environment and discovered very low fsync rates. We initially used ext4, which performs poorly on the current kernel. We now use ext3, as this seems to be the fastest option.
Server
- Hetzner EX 4s
- Intel i7-2600 Quad-Core
- 32 GB DDR3 RAM
- 2 x 3 TB SATA 6 Gb/s HDD 7200 rpm
Code:
root@venus ~ # pveversion -v
pve-manager: 2.0-59 (pve-manager/2.0/18400f07)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.88-2pve2
clvm: 2.02.88-2pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-38
pve-firmware: 1.0-15
libpve-common-perl: 1.0-26
libpve-access-control: 1.0-18
libpve-storage-perl: 2.0-17
vncterm: 1.0-2
vzctl: 3.0.30-2pve2
vzprocps: not correctly installed
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1
Software vs. Hardware RAID 1
We tested quite a few options with various filesystems but always got poor fsync rates as measured with pveperf. In the end - we had already ordered the hardware RAID by then - we discovered a huge improvement (40x!) just by changing some mount options:
Software RAID 1 (mdadm)
- ext3 defaults 0 0: BUFFERED READS: 165.10 MB/sec, AVERAGE SEEK TIME: 6.86 ms, FSYNCS/SECOND: 22.55
- ext3 rw,relatime,data=ordered 0 0: BUFFERED READS: 155.52 MB/sec, AVERAGE SEEK TIME: 6.30 ms, FSYNCS/SECOND: 853.28

2-Port Hardware RAID 1 (02:00.0 RAID bus controller: LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 05) - no BBU / no write cache)
- ext3 defaults 0 0: BUFFERED READS: 139.71 MB/sec, AVERAGE SEEK TIME: 7.00 ms, FSYNCS/SECOND: 1003.28
- ext3 rw,relatime,data=ordered 0 0: BUFFERED READS: 130.92 MB/sec, AVERAGE SEEK TIME: 7.02 ms, FSYNCS/SECOND: 1011.99
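For reference, the faster mount options can be made permanent in /etc/fstab; the device and mount point below are placeholders, not taken from the setup above:

```
/dev/md2  /var/lib/vz  ext3  rw,relatime,data=ordered  0  0
```

After editing fstab, `mount -o remount /var/lib/vz` applies the options without a reboot, and `pveperf /var/lib/vz` re-checks the fsync rate.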
2-Port Hardware RAID 1
02:00.0 RAID bus controller: LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 05) - no BBU / no write cache
Code:
sysbench --test=fileio --file-fsync-freq=1 --file-num=1 --file-total-size=16384 --file-test-mode=rndwr run
2.6.32-5-amd64 (orig. Debian Squeeze)
- ext3 defaults: 2760.15 Requests/sec (43.127 Mb/sec)
- ext3 rw,relatime,data=ordered 0 0: 2749.91 Requests/sec (42.967 Mb/sec)
- ext4: 2186.61 Requests/sec (34.166 Mb/sec)
- XFS: 1176.80 Requests/sec (18.388 Mb/sec)
- bfs: 2317.19 Requests/sec (36.206 Mb/sec)

2.6.32-5-amd64 (orig. Debian Squeeze) with Proxmox packages (dmsetup, libdevmapper1.02.1, lvm2)
- ext3 defaults: 2854.28 Requests/sec (44.598 Mb/sec)
- ext4: 2189.96 Requests/sec (34.218 Mb/sec)
- XFS: 2039.42 Requests/sec (31.866 Mb/sec)

2.6.32-11-pve with Proxmox packages (dmsetup, libdevmapper1.02.1, lvm2)
- ext3 defaults: 3241.04 Requests/sec (50.641 Mb/sec)
- ext4: 895.48 Requests/sec (13.992 Mb/sec)
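For a quick cross-check of fsync rates without pveperf or sysbench, a crude measurement can be scripted with dd; the file name, block size, and iteration count below are arbitrary choices, not taken from the tests above:

```shell
# Crude fsync-rate estimate: write one 4 KiB block and force it to disk,
# N times, then report the elapsed time. conv=fsync makes dd call fsync()
# on the output file before exiting.
FILE=./fsync_test.dat
N=100
START=$(date +%s%N)          # nanoseconds since epoch (GNU date)
i=0
while [ "$i" -lt "$N" ]; do
    dd if=/dev/zero of="$FILE" bs=4096 count=1 conv=fsync 2>/dev/null
    i=$((i + 1))
done
END=$(date +%s%N)
ELAPSED_MS=$(( (END - START) / 1000000 ))
echo "$N fsyncs in ${ELAPSED_MS} ms"
rm -f "$FILE"
```

The absolute numbers will not match pveperf or sysbench (both use different write patterns), but the relative effect of mount options and filesystems should still show up.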
Conclusions
- mount options can improve fsync rates quite a lot!
- hardware RAID is a bit faster on fsync rates, but it is not necessarily worth the money until you buy a BBU and enable the write cache.
- Use ext3 with Proxmox
- the Proxmox kernel 2.6.32-11-pve does something terrible to ext4 (895 Requests/sec vs. ~2190 on the stock Debian kernel)