Hi, I've had a Fujitsu TX200 to experiment with for a bit, and got some confusing results.
It has:
- Intel Xeon CPU E5520 @ 2.27GHz
- 12GB ram
- RAID controller: LSI Logic MegaRAID SAS 1078 with 512MB cache and BBU (write back)
CPU BOGOMIPS: 36266.75
REGEX/SECOND: 640637
proxmox:~# pveversion -v
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.24-11-pve
proxmox-ve-2.6.24: 1.5-23
pve-kernel-2.6.24-11-pve: 2.6.24-23
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
proxmox:~#
With RAID5 and 3x SAS 146GB 15K Seagate 6Gb/s HDs I've got:
proxmox:~# pveperf
HD SIZE: 66.93 GB (/dev/mapper/pve-root)
BUFFERED READS: 301.54 MB/sec
AVERAGE SEEK TIME: 4.69 ms
FSYNCS/SECOND: 3322.93
With RAID5 but 3x "normal" SATA HDs, 500GB:
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 235.00 MB/sec
AVERAGE SEEK TIME: 8.58 ms
FSYNCS/SECOND: 3374.63
And with RAID10, 4x "normal" SATA HDs, 500GB:
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 245.32 MB/sec
AVERAGE SEEK TIME: 8.71 ms
FSYNCS/SECOND: 3305.09
So my surprises are (considering FSYNCS/SECOND):
a) SATA RAID5 is as fast as SAS, even though its access time is double
b) SATA RAID10 is no faster than SATA RAID5
Is FSYNCS/SECOND a good indicator of "real life" performance? Does it take write speed into account as well? How do you test performance in a reliable way?
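For comparison, I've been approximating the FSYNCS/SECOND number with a quick dd run of synced writes (only roughly comparable to pveperf's actual fsync() loop — the path and block count here are just examples):

```shell
# Write 500 4 KB blocks, forcing each one to disk before the next
# (oflag=dsync is GNU dd; dd's stderr summary reports elapsed time,
#  so blocks/elapsed ~ synced writes per second)
dd if=/dev/zero of=/tmp/pve-fsync-test bs=4k count=500 oflag=dsync
rm -f /tmp/pve-fsync-test
```

With the BBU write-back cache on the controller, I'd expect this to be absorbed by the cache just like pveperf's fsyncs, which may explain why all three arrays score about the same.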
In addition I'm wondering: what about having a "normal" SATA HD for the Proxmox system installation and "basic" local storage, and then adding RAID storage (SAS or SATA) as LVM? Would I save "precious" fast HD space without compromising performance, or is Proxmox better run on a fast HD too?
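In other words, something like this (illustrative only — the device name is an assumption, and pvcreate destroys whatever is on the device):

```shell
pvcreate /dev/sdb            # the RAID array (device name assumed)
vgcreate raidvg /dev/sdb     # volume group name is just an example
# then add 'raidvg' as an LVM group storage in the Proxmox web interface
```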
And what about having an SSD drive instead of a RAID controller + "n" fast, expensive drives? It should be cheaper and have much better performance (near-zero access time, high sustained transfer rate, etc.). OK, no RAID = more risk, but for some installations I could live with a simple restore from the last daily backup.
And since I use kernel 2.6.24 (waiting for OpenVZ support in .32), what about converting to ext4? Would it manage I/O better in general, and stress the SSD less if I go the SSD route?
Finally, I would love to be able to boot and install directly with the 2.6.24 kernel. I had problems with 2.6.18 and the HD order (the RAID was sdb and the backup 1TB SATA HD was sda! Kernel 2.6.24 put them in the correct order, so I had to install with the 1TB SATA drive unplugged, upgrade the kernel, and re-plug it). I could also imagine hardware that only boots with recent kernels.
Thanks a lot in advance for tips and clarifications, and sorry for the maybe-too-long post.
Marco Menardi