How to improve performance?

Magneto · Well-Known Member · Jul 30, 2017
Hi,

We have a 3-node cluster with Ceph, using the following hardware:
3x Supermicro servers, each with:
128GB RAM
3x 8TB SATA HDD
2x SSD drives (Intel SSDSC2BA400G4 - 400GB DC S3710)
2x 12-core CPUs (Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz)
Quad-port 10GbE Intel NIC
2x 10Gb Cisco switches (to isolate the storage network from the LAN)


Copying a large file inside a Windows guest from one virtual HDD to another gets about 113 MB/s, while the SATA HDDs in the servers can each handle a sustained write of about 260 MB/s.

What can I do to improve performance?
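One quick sanity check worth doing first (my own observation, not something confirmed in the thread): 113 MB/s is almost exactly what a single saturated 1 GbE link delivers after framing overhead, so it is worth verifying that the storage traffic is actually flowing over the 10 GbE ports rather than a 1 Gb path. Back of the envelope:

```python
# Hypothetical check: is 113 MB/s just a saturated 1 GbE link?
link_bits_per_s = 1_000_000_000           # 1 GbE line rate
raw_mb_per_s = link_bits_per_s / 8 / 1e6  # 125 MB/s of raw bits
# Ethernet + IP + TCP framing typically eats roughly 5-6 %
payload_mb_per_s = raw_mb_per_s * 0.94    # usable payload rate
print(round(payload_mb_per_s, 1))         # -> 117.5, close to the observed 113
```

If the copy speed sits pinned near that number, the bottleneck is likely the network path, not the disks.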


Code:
root@virt2:~# rados bench -p data 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       154       138   551.887       552   0.0792181   0.0980691
    2      16       317       301   601.895       652   0.0867687    0.102741
    3      16       463       447   595.906       584   0.0532899    0.101398
    4      16       590       574   573.913       508    0.162289    0.107351
Total time run:       4.162648
Total reads made:     590
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   566.947
Average IOPS:         141
Stddev IOPS:          15
Max IOPS:             163
Min IOPS:             127
Average Latency(s):   0.112023
Max latency(s):       0.791784
Min latency(s):       0.0193005




root@virt2:~# rados bench -p data 10 rand
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       208       192    767.86       768   0.0116213   0.0720702
    2      16       391       375   749.889       732   0.0400655   0.0805542
    3      16       625       609   811.885       936   0.0188367   0.0745302
    4      16       844       828   827.886       876   0.0166921   0.0755718
    5      16      1049      1033   826.288       820   0.0209594   0.0748503
    6      16      1328      1312   874.549      1116  0.00463597   0.0712032
    7      16      1603      1587   906.737      1100   0.0865307   0.0696165
    8      16      1882      1866   932.878      1116   0.0949491   0.0674632
    9      16      2132      2116   940.323      1000    0.012315   0.0667656
   10      16      2423      2407   962.676      1164    0.115053   0.0655125
Total time run:       10.087903
Total reads made:     2424
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   961.151
Average IOPS:         240
Stddev IOPS:          39
Max IOPS:             291
Min IOPS:             183
Average Latency(s):   0.0659339
Max latency(s):       0.700279
Min latency(s):       0.00163058











root@virt2:~# rados bench -p data 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_virt2_28979
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        61        45   179.987       180   0.0534359     0.29766
    2      16       114        98   195.981       212     0.33039    0.301212
    3      16       174       158   210.646       240    0.228472    0.292437
    4      16       231       215   214.979       228     0.10103     0.28548
    5      16       292       276   220.777       244    0.350318    0.281382
    6      16       350       334   222.643       232    0.113847    0.280608
    7      16       405       389   222.261       220   0.0793358    0.280384
    8      16       467       451   225.475       248    0.473257    0.278699
    9      16       529       513   227.976       248    0.173304    0.275661
   10      16       589       573   229.175       240    0.273704    0.272588
Total time run:         10.245187
Total writes made:      590
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     230.352
Stddev Bandwidth:       21.0016
Max bandwidth (MB/sec): 248
Min bandwidth (MB/sec): 180
Average IOPS:           57
Stddev IOPS:            5
Max IOPS:               62
Min IOPS:               45
Average Latency(s):     0.275942
Stddev Latency(s):      0.121986
Max latency(s):         0.919498
Min latency(s):         0.0534359
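For context, the ~230 MB/s write figure is roughly what this hardware should deliver: with Ceph's default 3x replication, every client write lands on three OSDs, so the nine spinning disks absorb about three times the client bandwidth. A rough model (assuming the pool uses the default size=3 and one OSD per HDD, neither of which is stated in the thread):

```python
# Rough model of the write bench, assuming pool size=3 (replication)
# and one OSD per SATA HDD (9 OSDs across 3 nodes).
client_bw = 230.352            # MB/s reported by rados bench write
replicas = 3                   # assumed pool replication factor
osds = 9                       # 3 nodes x 3 HDDs (assumed)
raw_bw = client_bw * replicas  # data actually hitting the disks
per_osd = raw_bw / osds        # average sustained load per HDD
print(round(raw_bw, 1), round(per_osd, 1))  # -> 691.1 76.8
```

Around 77 MB/s of sustained random-ish writes per HDD is a reasonable ceiling for spinners, which suggests the cluster itself is performing as expected and the slow guest copy is a VM-level or network-level issue.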
 

Attachments

  • proxmox1.png (322.4 KB)
Are you using VirtIO disk or network drivers on the Windows Guests? Recent ones?
What about the QEMU Agent? Is it installed?
The Windows guests run the VirtIO driver - I will confirm the version shortly.
The disk copy didn't use the network, so I am not sure the VirtIO driver is relevant to the problem?

QEMU Agent? Is that inside Windows?
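For reference, the QEMU guest agent has two halves: a flag on the VM config, set on the Proxmox host, and the qemu-guest-agent service installed inside Windows (it ships on the virtio-win ISO). A sketch of the host side, where VMID 100 is a placeholder:

```shell
# On the Proxmox host: enable the agent option for the VM
# (100 is a hypothetical VMID -- substitute your own).
qm set 100 --agent enabled=1

# After installing the guest agent inside Windows from the
# virtio-win ISO, verify communication from the host:
qm agent 100 ping
```

The agent itself won't change disk throughput, but it gives the host clean shutdown, freeze/thaw for consistent backups, and guest info.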
 
To do that he'll also need to make sure Jumbo Frames are enabled on all hardware. Just enabling it on the storage server won't accomplish much.
Hi,

OK, so I have enabled Jumbo Frames on the switch and set MTU to 9000 on the storage NICs, but the file copy speed in Windows didn't improve.
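Jumbo frames only help if every hop honours them; a mismatched MTU can silently drop or fragment packets. A quick end-to-end check is to ping with the don't-fragment flag and a payload that exactly fills the frame (the target address 10.10.10.2 below is a placeholder for a storage-network peer):

```python
# Work out the ping payload that exactly fills a 9000-byte MTU:
# subtract 20 bytes IPv4 header + 8 bytes ICMP header.
mtu = 9000
payload = mtu - 20 - 8
print(payload)  # -> 8972

# Then, from one node (don't-fragment set, Linux ping syntax):
#   ping -M do -s 8972 10.10.10.2
# If this fails while a normal ping succeeds, some hop in the
# path is not passing jumbo frames.
```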
 
