Hi,
I am running Proxmox on a Kimsufi 2G box. Proxmox was installed from the ready-to-use image.
Code:
:~$ pveversion
pve-manager/4.1-2/78c5f4a2 (running kernel: 4.2.6-1-pve)
Inside Proxmox (running on an EXT3 partition) I am running 2 x Debian 8 containers (on EXT4 partitions), also installed from the OVH image. One of the CTs runs MySQL, which suffers from very poor I/O, maxing out at around 10 inserts/sec (I've already tweaked the MySQL config as far as it will go).
Disk benchmark (on the host):
Code:
:~$ sudo hdparm -Tt /dev/sda
Timing cached reads: 1604 MB in 2.00 seconds = 801.93 MB/sec
Timing buffered disk reads: 396 MB in 3.00 seconds = 131.84 MB/sec
My laptop with a 5400 RPM 1 TB 2.5" drive does better:
Code:
/dev/sda3:
Timing cached reads: 15746 MB in 2.00 seconds = 7877.89 MB/sec
Timing buffered disk reads: 734 MB in 3.02 seconds = 243.01 MB/sec
Inside the container I couldn't figure out how to run hdparm; it doesn't work against /dev/loop1.
Code:
Filesystem Size Used Avail Use% Mounted on
/dev/loop1 529G 983M 501G 1% /
none 103k 0 103k 0% /dev
cgroup 13k 0 13k 0% /sys/fs/cgroup
tmpfs 1.1G 0 1.1G 0% /sys/fs/cgroup/cgmanager
tmpfs 209M 41k 208M 1% /run
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 162M 0 162M 0% /run/shm
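Since hdparm needs a real block device, the closest thing I could come up with inside the container is dd with O_DIRECT, so the page cache doesn't inflate the read numbers (a rough sketch; O_DIRECT needs filesystem support, which ext4 has but tmpfs doesn't):

```shell
# Write a test file, then read it back with O_DIRECT so the page
# cache doesn't skew the result.
dd if=/dev/zero of=/tmp/readtest bs=1M count=256 oflag=direct
dd if=/tmp/readtest of=/dev/null bs=1M iflag=direct
rm /tmp/readtest
```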
So I tried dd instead.
On the host:
Code:
:~$ dd if=/dev/zero of=/tmp/test bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.47112 s, 124 MB/s
On the container (much slower):
Code:
tomek@CT101:~$ dd if=/dev/zero of=/tmp/test bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 17.167 s, 61.1 MB/s
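dd with 1 MiB blocks only measures sequential throughput, though, and InnoDB inserts are small synchronous writes. A rough way to model that with plain dd is to flush every 16 KiB block with oflag=dsync (just a sketch, not a proper benchmark):

```shell
# Rough model of small synchronous database writes: 16 KiB blocks,
# each flushed to disk (O_DSYNC). Expect this to be much slower
# than the sequential 1 MiB test above.
dd if=/dev/zero of=/tmp/synctest bs=16k count=500 oflag=dsync
rm /tmp/synctest
```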
Disk spec:
ATA device, with non-removable media
Model Number: TOSHIBA DT01ACA050
Serial Number: *
Firmware Revision: MS1OA750
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0; Revision: ATA8-AST T13 Project D1697 Revision 0b
CPU
model name : Intel(R) Atom(TM) CPU D425 @ 1.80GHz
stepping : 10
microcode : 0x107
cpu MHz : 1800.142
cache size : 512 KB
Motherboard:
D425KT
It looks like container I/O is about 50% slower than the host, and the host "performance" doesn't seem good either. I know this is a cheap box, but is that all it can do? Can I tweak the Proxmox/LXC/filesystem configuration to get better results? How can I actually measure/check where the bottleneck is?
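One idea I had for checking whether the disk itself is the bottleneck while the inserts run: sample /proc/diskstats (no extra packages needed). Field 13 of the sda line is total time spent doing I/O in milliseconds, so a delta close to the sampling interval means the disk is saturated (sketch; assumes the device is named sda):

```shell
# Sample "time spent doing I/O" (ms) for sda twice, 5 s apart.
# A delta near 5000 ms means the disk was ~100% busy.
t1=$(awk '$3=="sda" {print $13}' /proc/diskstats)
sleep 5
t2=$(awk '$3=="sda" {print $13}' /proc/diskstats)
echo "busy ms over 5 s: $((t2 - t1))"
```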
EDIT:
Adding output from bonnie++
Container:
Code:
:~$ sudo bonnie++ -d /tmp -r 2048 -u tomek
Using uid:1000, gid:1000.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
CT101 4G 95 93 58486 36 9836 6 507 91 10642 2 75.1 3
Latency 254ms 1190ms 2168ms 214ms 850ms 1682ms
Version 1.97 ------Sequential Create------ --------Random Create--------
CT101 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 3162 39 +++++ +++ 6715 37 7907 61 +++++ +++ 9642 49
Latency 20143us 3129us 51907us 21944us 419us 44495us
1.97,1.97,CT101,1,1453749999,4G,,95,93,58486,36,9836,6,507,91,10642,2,75.1,3,16,,,,,3162,39,+++++,+++,6715,37,7907,61,+++++,+++,9642,49,254ms,1190ms,2168ms,214ms,850ms,1682ms,20143us,3129us,51907us,21944us,419us,44495us
Host:
Code:
:/$ sudo bonnie++ -d /tmp -r 2048 -u tomek
Using uid:1000, gid:1000.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ks370359 4G 107 94 88683 71 22965 11 476 83 56187 10 184.0 8
Latency 185ms 1090ms 2189ms 43814us 550ms 965ms
Version 1.97 ------Sequential Create------ --------Random Create--------
ks370359 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 3638 26 +++++ +++ 4597 24 8715 57 +++++ +++ 11676 56
Latency 81247us 1621us 15458us 15474us 6762us 12156us
1.97,1.97,ks370359,1,1453727364,4G,,107,94,88683,71,22965,11,476,83,56187,10,184.0,8,16,,,,,3638,26,+++++,+++,4597,24,8715,57,+++++,+++,11676,56,185ms,1090ms,2189ms,43814us,550ms,965ms,81247us,1621us,15458us,15474us,6762us,12156us
Looks like the gap between host and container I/O performance is even bigger than 50%.