Low SSD speed in LXC

wir_wolf

I have this server https://www.hetzner.de/hosting/produkte_rootserver/ex51ssd
SSD Disk: Crucial_CT500MX200SSD1
I ran an I/O test with sysbench:
sysbench --test=fileio --file-total-size=2G prepare
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
Result on the PVE host:
Code:
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  284700 Read, 189800 Write, 607235 Other = 1081735 Total
Read 4.3442Gb  Written 2.8961Gb  Total transferred 7.2403Gb  (24.712Mb/sec)
1581.56 Requests/sec executed

Test execution summary:
    total time:                          300.0198s
    total number of events:              474500
    total time taken by event execution: 1.7923
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.59ms
         approx.  95 percentile:               0.01ms

Threads fairness:
    events (avg/stddev):           474500.0000/0.00
    execution time (avg/stddev):   1.7923/0.00
In the LXC container:
Code:
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

Operations performed:  17880 reads, 11920 writes, 38130 Other = 67930 Total
Read 279.38Mb  Written 186.25Mb  Total transferred 465.62Mb  (1.552Mb/sec)
   99.33 Requests/sec executed

General statistics:
    total time:                          300.0126s
    total number of events:              29800
    total time taken by event execution: 0.3285s
    response time:
         min:                                  0.00ms
         avg:                                  0.01ms
         max:                                  0.25ms
         approx.  95 percentile:               0.04ms

Threads fairness:
    events (avg/stddev):           29800.0000/0.00
    execution time (avg/stddev):   0.3285/0.00
LXC config:
Code:
arch: amd64
cpulimit: 4
cpuunits: 2048
hostname: *******.com
memory: 4086
nameserver: 8.8.8.8 8.8.4.4
net0: name=eth0,bridge=vmbr0,hwaddr=00:50:56:00:1E:09,ip=dhcp,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local:100/vm-100-disk-1.raw,quota=1,size=32G
searchdomain: *****.com
swap: 4086

Why is the I/O speed so slow?
 
Please mount your LXC container volume (it is a raw image) and run the test there on your host, so you are testing the same, real device stack. That way you will also use the same version of your benchmark tool.
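For example, a rough sketch of that on the host (assuming the container is stopped and the raw image sits in the default path of the "local" storage; adjust the path to your setup):
Code:
# mount the container's raw image via a loop device
mkdir -p /mnt/ct100
mount -o loop /var/lib/vz/images/100/vm-100-disk-1.raw /mnt/ct100

# run the same sysbench commands as before, but inside the mounted image
cd /mnt/ct100
sysbench --test=fileio --file-total-size=2G prepare
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench --test=fileio --file-total-size=2G cleanup

# unmount again
cd / && umount /mnt/ct100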
 
ZFS or LVM? I have LVM. There is a difference in speed, but not as large as yours.

From the LXC container:
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
Code:
Operations performed:  1461480 Read, 974320 Write, 3117770 Other = 5553570 Total
Read 22.3Gb  Written 14.867Gb  Total transferred 37.167Gb  (126.86Mb/sec)
8119.32 Requests/sec executed
Test execution summary:
    total time:                          300.0004s
    total number of events:              2435800
    total time taken by event execution: 28.2117
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.01ms
         max:                                 12.14ms
         approx.  95 percentile:               0.02ms
Threads fairness:
    events (avg/stddev):           2435800.0000/0.00
    execution time (avg/stddev):   28.2117/0.00

on mounted container volume:
Code:
Operations performed:  1434840 Read, 956560 Write, 3060900 Other = 5452300 Total
Read 21.894Gb  Written 14.596Gb  Total transferred 36.49Gb  (124.55Mb/sec)
7971.32 Requests/sec executed
Test execution summary:
    total time:                          300.0004s
    total number of events:              2391400
    total time taken by event execution: 26.8375
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.01ms
         max:                                  0.68ms
         approx.  95 percentile:               0.02ms
Threads fairness:
    events (avg/stddev):           2391400.0000/0.00
    execution time (avg/stddev):   26.8375/0.00

on proxmox itself:
Code:
Operations performed:  2262180 Read, 1508120 Write, 4825901 Other = 8596201 Total
Read 34.518Gb  Written 23.012Gb  Total transferred 57.53Gb  (196.37Mb/sec)
12567.65 Requests/sec executed
Test execution summary:
    total time:                          300.0003s
    total number of events:              3770300
    total time taken by event execution: 43.8905
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.01ms
         max:                                  1.03ms
         approx.  95 percentile:               0.02ms
Threads fairness:
    events (avg/stddev):           3770300.0000/0.00
    execution time (avg/stddev):   43.8905/0.00
 
What do you mean by "on proxmox itself"? The "on mounted container volume" test should also have been run on your Proxmox host itself.

Please provide the directories in which you benchmarked and run a df -hT . in each of them. If I interpret the numbers correctly, you only have a minor performance decrease from non-LXC to LXC, which is normal.
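For example, something like this in each location (the first directory name is only a placeholder for wherever you ran sysbench):
Code:
cd /root/sysbench-test && df -hT .          # wherever you benchmarked on the host
cd /var/lib/lxc/105/rootfs && df -hT .      # the mounted container volume
# ... and the same in the benchmark directory inside the container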
 
I initially ran two tests, one on the Proxmox root partition and one from inside the container, and there was some performance loss. Then I ran the test on the mounted volume and the speed loss was consistent.
By "on proxmox itself" I mean:
/dev/dm-0 ext4 9.8G 6.8G 2.5G 74% /
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)

mounted container volume
/dev/mapper/pve-vm--105--disk--1 ext4 30G 3.4G 25G 13% /var/lib/lxc/105/rootfs
/dev/mapper/pve-vm--105--disk--1 on /var/lib/lxc/105/rootfs type ext4 (rw,relatime,stripe=128,data=ordered)

Maybe stripe=128 is why there is a performance difference.

But I have never seen a ~16x performance drop like wir_wolf's.
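A quick way to compare the stripe hint of both filesystems (just a sketch, using the device names from the mount output above) would be:
Code:
# ext4 stores the stride/stripe-width hints in the superblock
tune2fs -l /dev/mapper/pve-vm--105--disk--1 | grep -i raid
tune2fs -l /dev/mapper/pve-root | grep -i raid
# the currently active mount options are also visible here
grep -E 'pve-(root|vm--105)' /proc/mounts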