Strange sysbench memory result on lxc container

BJ Quinn

Member
Mar 1, 2018
I'm getting wildly different results on the host vs. in an LXC container (CentOS 8) when running a specific sysbench memory test (--memory-access-mode=rnd). I don't see this with other sysbench tests (CPU / IO / mutex) or even other memory tests (e.g. sequential access). Any ideas?

This is PMX 7.0, but the same thing was happening on the same hardware with PMX 6.4 previously. Nothing else is running on the host, there are no containers other than the one I'm testing with, and it has no load on it other than this test.

sysbench memory run --memory-access-mode=rnd

Host:
Code:
Total operations: 26848812 (2684474.24 per second)

26219.54 MiB transferred (2621.56 MiB/sec)


General statistics:
    total time:                          10.0001s
    total number of events:              26848812

Latency (ms):
         min:                                    0.00
         avg:                                    0.00
         max:                                    0.11
         95th percentile:                        0.00
         sum:                                 7471.45

Threads fairness:
    events (avg/stddev):           26848812.0000/0.00
    execution time (avg/stddev):   7.4715/0.00

Container:
Code:
Total operations: 6994068 (699337.40 per second)

6830.14 MiB transferred (682.95 MiB/sec)


General statistics:
    total time:                          10.0001s
    total number of events:              6994068

Latency (ms):
         min:                                    0.00
         avg:                                    0.00
         max:                                    0.13
         95th percentile:                        0.00
         sum:                                 9187.69

Threads fairness:
    events (avg/stddev):           6994068.0000/0.00
    execution time (avg/stddev):   9.1877/0.00

What really sticks out to me is the 2621 MiB/s vs. 683 MiB/s.

I've tried adjusting cores vs. the CPU limit, allowing unlimited CPU, and even limiting to a single core (wondering if it was a NUMA issue, since it's an AMD EPYC processor), to no avail.
 
Hm, memory performance shouldn't be affected at all by containers. Potentially, 'sysbench' is doing a lot of syscalls in this chosen memory access mode (mmap/munmap maybe?), which could be slightly slower due to filtering and restrictions. I don't think this should have any impact on real-world production workloads (unless yours very specifically needs ultra-fast memory-related syscalls?), and it can probably be ignored. If you want, you can try running sysbench under 'strace' and checking which syscalls are being used.
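To get a rough sense of whether per-iteration syscalls (rather than raw memory bandwidth) are what's being slowed down, here's a small sketch you could run on both the host and in the container. This is a hypothetical microbenchmark, not sysbench's actual code path; it just contrasts a loop that issues mmap/munmap syscalls every iteration with one that touches a preallocated buffer:

```python
import mmap
import time

N = 20000
PAGE = 4096

# Syscall-heavy loop: every iteration does an mmap + munmap pair,
# so any per-syscall overhead (e.g. container syscall filtering)
# is paid N times.
t0 = time.perf_counter()
for _ in range(N):
    m = mmap.mmap(-1, PAGE)  # anonymous mapping -> mmap() syscall
    m[0] = 1                 # touch the page
    m.close()                # munmap() syscall
syscall_heavy = time.perf_counter() - t0

# Pure in-memory loop: same iteration count, but the buffer is
# allocated once, so there are no syscalls inside the loop.
buf = bytearray(PAGE)
t0 = time.perf_counter()
for _ in range(N):
    buf[0] = 1
in_memory = time.perf_counter() - t0

print(f"mmap/munmap loop: {syscall_heavy / N * 1e6:.2f} us/iter")
print(f"in-memory loop:   {in_memory / N * 1e6:.2f} us/iter")
```

If the mmap/munmap numbers diverge between host and container while the in-memory numbers stay close, that would support the syscall-overhead explanation.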
 

This is just part of my routine list of sysbench benchmarks; I didn't even notice the --memory-access-mode=rnd switch until I started digging into what the issue was. I don't specifically know that my workload will depend on whatever the problem is here.

I'm planning various workloads for this same template (same hardware, PMX 7, CentOS 8 container, etc.), but one of the primary workloads will be very high-performance MySQL servers. Do you know of anything about LXC that would make that a bad fit? All the other sysbench benchmarks, including some basic MySQL benchmarks, seem reasonable except this one, which is what gave me pause.

I've done MySQL on LXC plenty of times and it worked fine, but those were a lighter workload than this will be.
 
I personally don't know of any real-world workloads affected too heavily by containers, but that doesn't mean there aren't any. I believe MySQL (and other SQL databases) are widely enough in use that such regressions would be caught by upstream LXC quite quickly, but truthfully the only way to know is to run your very specific workload and check whether you notice any degradation.
 
I have a different question about this. How did you run sysbench inside LXC/LXD? I am trying to run sysbench inside LXC to measure CPU, memory, and I/O performance.
 

