ASP+MSSQL is 2x slower than in other environments.

aTan

Hi, I have an ASP.NET + MSSQL 2012 application on a Windows Server 2012 R2 VM (LVM storage, virtio drivers). The problem is that this application is about 2x slower than on an ESXi VM, on a real HW server (similar config), or even on a work notebook. I've tried the tweaks from https://pve.proxmox.com/wiki/Performance_Tweaks and disabling the Balloon service driver, but no luck. Does anybody run high-load ASP+MSSQL apps? Do you have any problems?
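The tweaks I tried were along these lines (a rough sketch assuming VM ID 111, not an exact transcript):

Code:
# remove the emulated USB tablet device (saves idle CPU on Windows guests)
qm set 111 --tablet 0
# pin the full memory amount, effectively disabling ballooning
qm set 111 --balloon 0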

Code:
pveversion --verbose
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-35
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-9
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
No running VMs. DRBD is running on a separate disk of the same type as the system disk.

Code:
pveperf
CPU BOGOMIPS:      119994.48
REGEX/SECOND:      1142641
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    189.73 MB/sec
AVERAGE SEEK TIME: 5.14 ms
FSYNCS/SECOND:     2846.49
DNS EXT:           32.69 ms
DNS INT:           10.34 ms
 
A CrystalDiskMark benchmark I ran 3 weeks ago on the VM is below. I also tried putting the DB files on a USB flash drive to simulate slower disk performance on the notebook, but it was still 2x faster than on the Proxmox VM. It doesn't look like a slow-storage problem.


-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 1111.116 MB/s
Sequential Write : 88.271 MB/s
Random Read 512KB : 684.499 MB/s
Random Write 512KB : 77.532 MB/s
Random Read 4KB (QD=1) : 12.662 MB/s [ 3091.3 IOPS]
Random Write 4KB (QD=1) : 4.135 MB/s [ 1009.6 IOPS]
Random Read 4KB (QD=32) : 22.155 MB/s [ 5408.9 IOPS]
Random Write 4KB (QD=32) : 4.538 MB/s [ 1107.8 IOPS]
Test : 1000 MB [C: 42.8% (34.1/79.7 GB)] (x5)
Date : 2014/10/10 16:53:28
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)
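For a host-side sanity check, a roughly comparable 4K QD=1 write test could be run with fio on the Proxmox node itself (a sketch; the path and size are illustrative):

Code:
# synchronous 4K random writes at queue depth 1, similar in spirit to the QD=1 CrystalDiskMark case
fio --name=randwrite-qd1 --filename=/var/lib/vz/fio-test --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --ioengine=libaio --direct=1 --sync=1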
 
I use virtio drivers for the disk controller and the network.

Code:
bootdisk: virtio0
cores: 4
cpu: host
ide0: none,media=cdrom
memory: 24768
name: IIS
net0: virtio=9A:50:BF:85:06:43,bridge=vmbr1
net1: e1000=1E:AD:89:E8:F8:82,bridge=vmbr2,tag=100
onboot: 1
ostype: win8
scsihw: virtio-scsi-pci
sockets: 1
virtio0: drbd:vm-111-disk-1,cache=none,size=80G
 
The controller is already changed to virtio-scsi-pci; with the default LSI SCSI controller it was the same.
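If I understand it correctly, scsihw only matters for disks attached as scsiN; a virtio0 disk goes through virtio-blk either way. Attaching the same volume through the SCSI controller would look roughly like this (a sketch):

Code:
scsihw: virtio-scsi-pci
scsi0: drbd:vm-111-disk-1,cache=none,size=80G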
 
For QD=32 your test results do not look so good.

Below mine:
Code:
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]


           Sequential Read :    97.569 MB/s
          Sequential Write :    46.227 MB/s
         Random Read 512KB :    78.239 MB/s
        Random Write 512KB :    45.656 MB/s
    Random Read 4KB (QD=1) :     6.690 MB/s [  1633.2 IOPS]
   Random Write 4KB (QD=1) :     3.673 MB/s [   896.7 IOPS]
   Random Read 4KB (QD=32) :    36.573 MB/s [  8928.9 IOPS]
  Random Write 4KB (QD=32) :    23.192 MB/s [  5662.2 IOPS]


  Test : 1000 MB [F: 1.0% (58.7/6141.0 MB)] (x5)
  Date : 2014/11/04 19:49:41
    OS : Windows 7 Enterprise SP1 [6.1 Build 7601] (x64)
 
Is your VM on local storage (SATA, HW RAID, software RAID...) or DRBD?

From the wiki page http://pve.proxmox.com/wiki/DRBD:
Do not use write cache for any virtual drives on top of DRBD as that can cause out of sync blocks. You need to use 'writethrough' or 'directsync' instead of the default 'none'. Follow the link for more information: http://forum.proxmox.com/threads/18...-sync-long-term-investigation-results?p=93126

virtio0: drbd:vm-111-disk-1,cache=none,size=80G
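Changing the cache mode on the existing disk would be something like this (a sketch; keep the rest of the volume spec as it is):

Code:
qm set 111 --virtio0 drbd:vm-111-disk-1,cache=writethrough,size=80G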
 
mir:
For ATTO I'd have to register and wait for an email, and HD Tune doesn't test writes in the free version. The QD=32 write result does look bad.
snowman66, spirit:

DELL PERC H710 Mini 6Gbps - 03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)
It is configured as RAID10 with 4x 600GB SAS HDDs.
Regarding the cache setting, I had writethrough before switching to none; it was the same. I haven't tested directsync, because it is the same as writethrough, only without the read cache on the host.
I created a test logical volume and ran pveperf on it (output below). FSYNCS/SECOND is almost 3 times lower than on the system disk (which is a RAID1 on the same RAID controller).

pveperf /mnt/test
CPU BOGOMIPS:      119994.48
REGEX/SECOND:      1189884
HD SIZE:           0.98 GB (/dev/mapper/drbdvg-test)
BUFFERED READS:    340.30 MB/sec
AVERAGE SEEK TIME: 3.16 ms
FSYNCS/SECOND:     923.75
DNS EXT:           35.39 ms
DNS INT:           10.61 ms
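For reference, the test volume was set up roughly like this (reconstructed from the output above, so treat it as a sketch):

Code:
# 1 GB test LV on the DRBD-backed volume group
lvcreate -L 1G -n test drbdvg
mkfs.ext4 /dev/drbdvg/test
mkdir -p /mnt/test && mount /dev/drbdvg/test /mnt/test
pveperf /mnt/test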

It looks like DRBD is not very good at writes. The servers are connected with two bonded 1Gb network adapters. Is there any way to optimize DRBD writes?

Code:
resource r0 {
        protocol C;
        startup {
                 wfc-timeout  0;     # non-zero wfc-timeout can be dangerous  (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "***";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
                 #data-integrity-alg crc32c;     # has to be enabled only for test and  disabled for production use (check man drbd.conf, section "NOTES ON DATA  INTEGRITY")
        }
        on vms1 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.222.1:7788;
                meta-disk internal;  
        }
        on vms2 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.222.2:7788;
                meta-disk internal;  
        }
        disk {
                 # no-disk-barrier and no-disk-flushes should be applied only to systems  with non-volatile (battery backed) controller caches.
                # Follow links for more information:
                # http://www.drbd.org/users-guide-8.3/s-throughput-tuning.html#s-tune-disable-barriers
                # http://www.drbd.org/users-guide/s-throughput-tuning.html#s-tune-disable-barriers
                # no-disk-barrier;
                # no-disk-flushes;
        }
}
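The net section above uses the defaults as far as buffering goes; the throughput-tuning guide linked in the comments mentions a few knobs. A sketch of what adding them inside the resource might look like (the guide's example values, nothing I have actually tested here):

Code:
        net {
                # candidate throughput tuning from the DRBD guide (illustrative values)
                max-buffers     8000;
                max-epoch-size  8000;
                sndbuf-size     512k;
        }
        syncer {
                al-extents      3389;
        }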
 
It looks like DRBD is not very good at writes. The servers are connected with two bonded 1Gb network adapters. Is there any way to optimize DRBD writes?

Yes, this part is related to writes.
(I don't use DRBD, so I don't know about its stability.)


Code:
        disk {
                # no-disk-barrier and no-disk-flushes should be applied only to systems with non-volatile (battery backed) controller caches.
                # Follow links for more information:
                # http://www.drbd.org/users-guide-8.3/s-throughput-tuning.html#s-tune-disable-barriers
                # http://www.drbd.org/users-guide/s-throughput-tuning.html#s-tune-disable-barriers
                # no-disk-barrier;
                # no-disk-flushes;
        }
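Since the PERC H710 above has a battery-backed cache, enabling them would presumably just mean uncommenting those two options (an untested sketch, and only safe if the controller cache really is non-volatile and the disks' own write caches are disabled):

Code:
        disk {
                no-disk-barrier;
                no-disk-flushes;
        }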
 
