Low hard drive speed in virtual Windows 2008 R2

crossfire2010

Mar 11, 2017
Hi!
I have a problem with HDD speed under Windows 2008 R2. Hardware for the Proxmox cluster:
2 Intel servers with Intel Xeon v3 CPUs
Infortrend 1024 drive storage - 24 SAS HDDs in RAID 6
1 virtual server for cluster quorum

Both servers are connected to the drive storage via SAS adapters. When the partition is attached as LVM and Windows 2008 R2 is installed on the raw disk, HD Tune reports around 90-110 MB/s. But if I mount the same partition as a Directory and use a QEMU disk image, HD Tune reports around 800-900 MB/s.

Help me solve this problem with LVM.
 
I use VirtIO on Proxmox version 4.4 with the latest updates. I have tried different controllers - the test results are the same. I also mapped the disk as thin LVM; the speed test then showed 300-400 MB/s, but thin LVM does not support HA.
 
To clarify: the local partitions of the storage were connected with LVM for test purposes, and that is what revealed the big difference in speed.
 
Hi,
is the SAS RAID under heavy load from other VMs' access? Are the BBU/cache settings OK?

Udo
 
There is no load from other virtual machines. The cluster is currently being tested with the cache settings, and everything seems normal. I tested the host partition /dev/sdb1 against the LVM volume /dev/vmdata/vm-100-disk-1 (the disk with the installed system). What seemed strange is that the first test of the partition almost always shows low speed, but later runs match the parameters of the SAS 6G data bus.

root@pve:/dev# dd if=/dev/sdb1 of=/dev/null bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.57946 s, 416 MB/s
root@pve:/dev# dd if=/dev/sdb1 of=/dev/null bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.204476 s, 5.3 GB/s
root@pve:/dev# dd if=/dev/sdb1 of=/dev/null bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.179273 s, 6.0 GB/s


And testing the LVM volume always shows roughly the same value:

root@pve:/dev# dd if=/dev/vmdata/vm-100-disk-1 of=/dev/null bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.5829 s, 416 MB/s
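The jump from 416 MB/s to 5-6 GB/s on repeated runs of the same dd command is the Linux page cache serving the data from RAM. A minimal sketch of how to get cold-cache numbers instead; a scratch file stands in for /dev/sdb1 so the commands are safe to run anywhere:

```shell
# Create a scratch file to read back; reading /dev/sdb1 as in the
# post would behave the same way but requires root.
dd if=/dev/zero of=./dd-scratch.bin bs=1M count=16 conv=fsync

# iflag=direct opens the file with O_DIRECT and bypasses the page
# cache, so every run measures the actual device instead of RAM.
dd if=./dd-scratch.bin of=/dev/null bs=1M iflag=direct

# Alternative (needs root): flush the page cache between normal runs.
# sync; echo 3 > /proc/sys/vm/drop_caches

rm -f ./dd-scratch.bin
```

With either approach, repeated runs should report roughly the same throughput, like the LVM volume does above.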
 
Please don't test with dd; it's not a real benchmark. The second and third runs were served from the page cache, which is why the unbelievably high GB/s throughput was reported.
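A sketch of an fio job that would give a more meaningful number than dd. The device path is taken from the post; the block size, queue depth, and runtime are illustrative assumptions, not settings from this thread:

```ini
; read-bench.fio -- illustrative sequential read job
[seqread]
filename=/dev/vmdata/vm-100-disk-1
rw=read
bs=1M
direct=1
ioengine=libaio
iodepth=8
runtime=30
time_based
```

Run it as `fio --readonly read-bench.fio` (with the VM stopped) so fio refuses to issue any writes to the volume; `direct=1` keeps the page cache out of the measurement.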
 
Okay, then why does HD Tune inside the virtual machine show 100-120 MB/s when the disk is on LVM, but 800-900 MB/s when the partition is mounted to a local folder and a QEMU disk image is used?
 