IOzone CentOS 6 VM to SAN Performance Results over 10GbE

OldSunGuy

Mar 23, 2012
Results from iozone testing on a CentOS VM under Proxmox 2.0-45:
Proxmox kernel version: 2.6.32-10-pve; VM (CentOS) kernel: 2.6.32

The VM has 6 GB of RAM and uses virtio to a 100 GB LUN (RAID 0 over 4 disks) on the SAN for both the OS and data (together)

Note 1: The OS and data might perform better on separate LUNs
Note 2: Each test was run 4 times; each run gives 2 results for reads and 2 for writes, and the 8 numbers in each category were averaged

The test command used was: iozone -s #G -r 1024 -i 0 -i 1 -w -f <filename>
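For readers unfamiliar with iozone, here is a brief gloss of that command line (a sketch only; the 10G size and the file path are placeholders, and -o is the extra flag referred to in the synced runs below):
Code:
iozone -s 10G -r 1024 -i 0 -i 1 -w -f /mnt/san/testfile
#  -s 10G   size of the test file (10 GB in this example)
#  -r 1024  record size in KB, i.e. 1 MB transfers
#  -i 0     run test 0: write / rewrite
#  -i 1     run test 1: read / re-read
#  -w       keep the temporary test file when done
#  -f ...   path of the test file (placeholder path here)
#  -o       (when added) open the file O_SYNC, so every write is committed to disk before returning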

10 GB file (larger than local RAM, so the file sync option -o was NOT used)
Write Ave= 48 MB/s
Read Ave= 183 MB/s

7 GB file (larger than local RAM, so the file sync option -o was NOT used)
Write Ave= 41 MB/s
Read Ave= 297 MB/s

1 GB file (test file smaller than local RAM, so the file sync option -o WAS used)
Write Ave= 37 MB/s
Read Ave= 5.33 GB/s

1 GB file (smaller than local RAM, but the file sync option -o was NOT used)
Write Ave= 1.91 GB/s
Read Ave= 4.92 GB/s

Conclusion and Next Step:
I think the vast majority of file writes will be smaller than RAM, so it seems valid to take the write rates without file syncing as representative of everyday usage.
The next step would be to run multiple VMs on the Proxmox box and come up with a load test to see what kinds of rates can be realized.

Questions:
1- How do these results rate for this kind of configuration?
2- Is my conclusion valid for VM usage?
3- Can anyone suggest other performance tests to evaluate how Proxmox will perform with this configuration?
 
Hi,
it looks like you are measuring caching, except for the 1 GB write.
How is your VM disk connected on the host? iSCSI? LVM storage? If you access a filesystem on this RAID from the host, what values do you get?
(For reads, pveperf is not bad; for writes, use "dd if=/dev/zero of=bigfile bs=1024k count=8192 conv=fdatasync".)

What kind of caching is set in the VM config: cache=none or cache=writethrough?
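For reference, the cache mode sits on the disk line of the VM's config file; a minimal sketch, with the VMID, storage name and disk name as placeholders:
Code:
# /etc/pve/qemu-server/101.conf   (101 is a placeholder VMID)
virtio0: san-storage:vm-101-disk-1,cache=none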

You can drop the (read) cache with "echo 3 > /proc/sys/vm/drop_caches" (on both the host and the VM?)
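A small sketch of how the cache drop is usually done (as root), flushing dirty pages first so the following numbers aren't served from memory:
Code:
sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes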

Udo
 
Hi Udo-
The VM uses iSCSI directly on an attached 100 GB RAID 0 LUN from the Enhance SAN. When I ran the dd tests from the host (at the Proxmox shell prompt) I got slightly higher rates: a 10 GB file with syncing ran at ~70 MB/s. iperf rates were disappointing as well.
I was also using writethrough caching.
Thanks also for the tip on dropping the read cache.
My next step is as follows:
Replace Proxmox on the server with Win2K8 and run IOmeter with the setup file Enhance sent me, to see if I can reproduce the rates they publish on their website. Thanks again for your help.
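For anyone repeating the iperf check of the raw 10GbE path, a minimal sketch (the IP address, duration and stream count are placeholders; since the SAN is a closed box, this only works between two hosts you control on the same network):
Code:
# on the receiving host
iperf -s
# on the sending host (10.0.0.2 stands in for the receiver's IP)
iperf -c 10.0.0.2 -t 30 -P 4     # 30-second run with 4 parallel streams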
 
Hi,
70 MB/s over a 10Gb connection?? That doesn't sound very fast...
I just did a test with iSCSI connected via only a 1Gb NIC (normal MTU):
Code:
root@pve2-test1:~# pveperf /mnt/pve/test
CPU BOGOMIPS:      32509.69
REGEX/SECOND:      1237600
HD SIZE:           98.43 GB (/dev/mapper/oa_sata_r10-test)
BUFFERED READS:    96.82 MB/sec
AVERAGE SEEK TIME: 7.69 ms
FSYNCS/SECOND:     776.81
DNS EXT:           136.42 ms
DNS INT:           0.97 ms
root@pve2-test1:~# dd if=/dev/zero of=/mnt/pve/test/bigfile bs=1024k count=8192 conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 79.0924 s, 109 MB/s
OK, it's a fast RAID controller inside the iSCSI box with several (I think 8) disks in RAID 10, but only "normal" SATA disks.

Are you running the test to check virtualization performance or to check the iSCSI device?

Udo
 
Hi, you can also tune your

ETHERNET STACK
-----------------
/etc/network/interfaces

put

auto ethX
iface ethX inet static
address X.X.X.X
netmask X.X.X.X
mtu 9000
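Applying and verifying the MTU change might look like this (a sketch; ethX and the SAN address are placeholders, and the switch and SAN must pass jumbo frames end to end for this to help):
Code:
ifdown ethX && ifup ethX      # re-read the interface stanza
ip link show ethX             # confirm "mtu 9000" is now set
ping -M do -s 8972 <san-ip>   # 8972 bytes payload + 28 bytes headers = 9000; fails if jumbo frames are blocked anywhere on the path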

TCP STACK
-----------
/etc/sysctl.conf

net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_rmem = 4096 524287 16777216
net.ipv4.tcp_wmem = 4096 524287 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
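These sysctl settings can be loaded without a reboot (assuming they were added to /etc/sysctl.conf as above):
Code:
sysctl -p /etc/sysctl.conf     # apply the new TCP/network settings immediately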

ISCSI STACK
-------------
and your

/etc/iscsi/iscsid.conf

# The default is No.
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072
node.session.iscsi.FastAbort = Yes
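Note that iscsid.conf defaults only apply to sessions created after the change; an already-connected LUN needs a logout/login cycle, roughly like this (the target IQN is a placeholder):
Code:
iscsiadm -m node -T iqn.2001-05.com.example:san.lun0 -u   # log out of the existing session
iscsiadm -m node -T iqn.2001-05.com.example:san.lun0 -l   # log back in, picking up the new parameters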

SCHEDULER
------------
you can try the deadline scheduler:

edit /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"
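After editing the grub default, the config has to be regenerated and the box rebooted; the scheduler can also be switched per device at runtime for a quick test (sdX is a placeholder):
Code:
update-grub                                       # rebuild /boot/grub/grub.cfg, then reboot
echo deadline > /sys/block/sdX/queue/scheduler    # or switch a single disk on the fly
cat /sys/block/sdX/queue/scheduler                # the active scheduler is shown in [brackets]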

PROXMOX
---------
use LVM storage or direct iSCSI LUNs
 
The bad thing about the Enhance Technologies SAN is that it is a locked box - we can't change any parameters. We found out from them that the maximum MTU it recognizes is 3000, so we just turned off jumbo frames altogether. The poor performance can't be explained by jumbo frame support alone; the discrepancy seems far too big. It looks like we are only getting 1Gb Ethernet bandwidth.

The tests above were to a VM under Proxmox, with the VM sitting on its own LUN. We also ran tests from the Proxmox command line directly, with no VMs, and the best rates we got were: write 104 MB/s and read 509 MB/s. (We had to use a much larger file size, 40 GB, since the bare server has 32 GB of RAM.) I will check those other settings to see if we can improve performance by changing the ones that differ from the above. Thank you both for your help.
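For reference, the 40 GB equivalent of Udo's dd write test (a sketch, same conv=fdatasync approach; the output path is a placeholder) would be:
Code:
dd if=/dev/zero of=/mnt/san/bigfile bs=1024k count=40960 conv=fdatasync   # 40960 x 1 MB = 40 GB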
 
One more thing.
We loaded Win2K8 directly on the same server that had Proxmox on it - no OS changes or Ethernet tweaks to the default Win2K8 installation; we just made sure we were using the latest Intel 10 Gigabit Ethernet driver. Using IOmeter with the IOmeter config file they provided, we got close to the published Enhance-Tech performance rates:

Write rate: 480 MB/s
Read rate: 610 MB/s

from http://www.enhance-tech.com/product...-controller-10g-iscsi-san-storage-system.html

Similar speeds were also seen with the hd_speed tool.
 
