Low IOPS in LXC container on read

halt

New Member
I tested IOPS in an LXC container on Debian 12. I don't understand why I got such low IOPS on read operations:


read: ~7k IOPS
write: ~81k IOPS

Code:
                   "readtest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32",
                    "fio-3.33",
                    "Starting 1 process",
                    "readtest: Laying out IO file (1 file / 4096MiB)",
                    "",
                    "readtest: (groupid=0, jobs=1): err= 0: pid=3742: Fri Aug  4 05:13:49 2023",
                    "  read: IOPS=7526, BW=29.4MiB/s (30.8MB/s)(4096MiB/139310msec)",
                    "    slat (nsec): min=1949, max=1018.0k, avg=3123.51, stdev=2563.56",
                    "    clat (usec): min=265, max=10327, avg=4247.69, stdev=552.58",
                    "     lat (usec): min=301, max=10333, avg=4250.81, stdev=552.70",
                    "    clat percentiles (usec):",
                    "     |  1.00th=[ 2933],  5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3818],",
                    "     | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359],",
                    "     | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5145],",
                    "     | 99.00th=[ 5669], 99.50th=[ 5997], 99.90th=[ 6849], 99.95th=[ 7308],",
                    "     | 99.99th=[ 8291]",
                    "   bw (  KiB/s): min=25896, max=35248, per=100.00%, avg=30117.09, stdev=2034.55, samples=278",
                    "   iops        : min= 6474, max= 8812, avg=7529.27, stdev=508.64, samples=278",
                    "  lat (usec)   : 500=0.01%, 1000=0.01%",
                    "  lat (msec)   : 2=0.01%, 4=31.22%, 10=68.77%, 20=0.01%",
                    "  cpu          : usr=1.69%, sys=3.63%, ctx=902869, majf=0, minf=41",
                    "  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%",
                    "     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%",
                    "     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%",
                    "     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0",
                    "     latency   : target=0, window=0, percentile=100.00%, depth=32",
                    "",
                    "Run status group 0 (all jobs):",
                    "   READ: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=4096MiB (4295MB), run=139310-139310msec",
                    "",
                    "Disk stats (read/write):",
                    "  loop0: ios=1047189/32, merge=0/0, ticks=4441444/13, in_queue=4441458, util=99.99%"

Code:
                    "writetest: (groupid=0, jobs=1): err= 0: pid=3773: Fri Aug  4 05:14:04 2023",
                    "  write: IOPS=81.4k, BW=318MiB/s (333MB/s)(4096MiB/12882msec); 0 zone resets",
                    "    slat (usec): min=2, max=1032, avg= 2.88, stdev= 1.89",
                    "    clat (usec): min=7, max=2473.7k, avg=389.80, stdev=13669.57",
                    "     lat (usec): min=11, max=2473.7k, avg=392.67, stdev=13669.57",
                    "    clat percentiles (usec):",
                    "     |  1.00th=[  121],  5.00th=[  141], 10.00th=[  145], 20.00th=[  165],",
                    "     | 30.00th=[  188], 40.00th=[  196], 50.00th=[  206], 60.00th=[  237],",
                    "     | 70.00th=[  334], 80.00th=[  449], 90.00th=[  644], 95.00th=[  701],",
                    "     | 99.00th=[ 1057], 99.50th=[ 1172], 99.90th=[ 3523], 99.95th=[ 7439],",
                    "     | 99.99th=[20317]",
                    "   bw (  KiB/s): min=126528, max=721576, per=100.00%, avg=388326.71, stdev=177543.03, samples=21",
                    "   iops        : min=31632, max=180394, avg=97081.76, stdev=44385.66, samples=21",
                    "  lat (usec)   : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=62.62%",
                    "  lat (usec)   : 500=22.55%, 750=12.25%, 1000=1.43%",
                    "  lat (msec)   : 2=0.98%, 4=0.09%, 10=0.03%, 20=0.03%, 50=0.01%",
                    "  lat (msec)   : >=2000=0.01%",
                    "  cpu          : usr=12.89%, sys=25.79%, ctx=178351, majf=1, minf=15",
                    "  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%",
                    "     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%",
                    "     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%",
                    "     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0",
                    "     latency   : target=0, window=0, percentile=100.00%, depth=32",
                    "",
                    "Run status group 0 (all jobs):",
                    "  WRITE: bw=318MiB/s (333MB/s), 318MiB/s-318MiB/s (333MB/s-333MB/s), io=4096MiB (4295MB), run=12882-12882msec",
                    "",
                    "Disk stats (read/write):",
                    "  loop0: ios=158/1034479, merge=0/0, ticks=232/401349, in_queue=409276, util=94.14%"


Code:
[readtest]
size=4096M
blocksize=4k
filename=/tmp/fio.file
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32


Code:
[writetest]
size=4096M
blocksize=4k
filename=/tmp/fio.file
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32
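
For reference, a minimal way to run these jobs, assuming the job files are saved as readtest.fio and writetest.fio (the filenames are my own choice, not taken from the post):

Code:
fio readtest.fio
fio writetest.fio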
 
Run your test for a longer period of time, e.g. 1 minute, and compare the results then.
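
A sketch of the same read job switched to a fixed-duration run (time_based and runtime are standard fio options; the 60 seconds here is just an example value):

Code:
[readtest]
size=4096M
blocksize=4k
filename=/tmp/fio.file
rw=randread
direct=1
ioengine=libaio
iodepth=32
; loop over the file for a fixed wall-clock duration instead of stopping at 4096M
time_based=1
runtime=60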
The results are the same :(



I ran grep over a 24 GB directory in a VM (Debian 12) and in an LXC container (Debian 12); the times are in the table below (an example command follows the table).
Disk performance in LXC is slower than in the VM :(

Code:
Run    1      2      3
LXC    2m15   2m28   2m42
VM     1m16   1m28   1m24
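
A minimal sketch of this kind of test (the search pattern and directory path are placeholders, not the exact ones used above):

Code:
# Drop the page cache first so reads actually hit the disk
# (needs root; inside an unprivileged container this may not be permitted).
sync && echo 3 > /proc/sys/vm/drop_caches
# Time a recursive search over the same directory tree in both guests.
time grep -r "pattern" /path/to/24gb-dir > /dev/null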
 
