Poor disk performance on LXC containers

muko
May 14, 2021
Premise: I do not have good hardware.

My configuration is:
2x NVMe disks configured in software RAID0 with ZFS and compression=lz4

125 containers are running, all with the same characteristics, and they are all doing the same job.

Each container seems to get around 10 MB/s in writes, while the host has an average of 200 MB/s.

Why is it so bad?

I already did some tests:

Code:
root@patreh:~# pvesm status
Name        Type     Status   Total        Used        Available    %
local       dir      active   2063268864   3664256     2059604608   0.18%
local-zfs   zfspool  active   3604355344   1544750732  2059604612   42.86%

Code:
root@patreh:~# pveperf
CPU BOGOMIPS:      118395.20
REGEX/SECOND:      1350050
HD SIZE:           1970.87 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     2924.62
DNS EXT:           55.66 ms
DNS INT:           103.52 ms (feniva.it)
pveperf gives the same result on /var/lib/vz.


Any idea?
 
Do you mean ZFS on top of a software RAID0 (like the one built into the mainboard) or a stripe created directly with ZFS?
What drives have you got? It's recommended not to use consumer SSDs with ZFS because they are just too slow and not durable enough.
If you want to benchmark them you should have a look at fio like it is done here.
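A rough sketch of such a test on the host (the block size, runtime and test path are only example values, adjust them to your setup):

Code:
# 4k random writes for 60 seconds against the local directory storage
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
    --iodepth=32 --size=4G --runtime=60 --time_based \
    --filename=/var/lib/vz/fio-testfile
# remove the test file afterwards
rm /var/lib/vz/fio-testfile

Running the same test inside one of the containers should show whether the gap between host and container is really that big.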
 
A RAID0 made with ZFS. And I'm using enterprise NVMe disks.
Performance on the host is quite acceptable; inside the LXC containers it is very bad...
 
Can you show the configuration of one of the slow containers (from /etc/pve/lxc/*.conf)? Maybe you are not using ZFS storage (subvol) but raw or qcow2 file based storage?
 
Code:
ostype: ubuntu
rootfs: local-zfs:subvol-130-disk-0,mountoptions=noatime,replicate=0,size=25G
swap: 1024
 
Your container configuration looks good for storage. Maybe it has not enough memory (your configuration looks incomplete) and is swapping a lot? Maybe your CPU is a bottleneck with compression on ZFS? Are you really only getting 5% of the disk performance when running a single container? What do you use to measure the disk performance?
 
The test:
Code:
wget -qO- wget.racing/nench.sh | bash; wget -qO- wget.racing/nench.sh | bash

The configuration:
Code:
arch: amd64
cores: 1
cpulimit: 1
hostname: HOST_NAME
memory: 1024
net0: name=eth0,bridge=vmbr1,firewall=1,gw=XXXXXXX,hwaddr=AXXXXXX,ip=XXXXX,type=veth
net1: name=eth1,bridge=vmbr1,hwaddr=XXXXX,ip=XXXXX/24,tag=1,type=veth
ostype: ubuntu
rootfs: local-zfs:subvol-130-disk-0,mountoptions=noatime,replicate=0,size=25G
swap: 1024

Each container uses no more than 300MB of RAM
net1 is not used

IO delay is 0.04% at peak, and only sometimes.

I don't think the CPU (Intel(R) Xeon(R) E-2288G CPU 8c/16t) is a bottleneck...

One other thing: I tried to create swap files inside the container, but when activating the swap space it says the file cannot be used because it "contains holes". Is it necessary to activate a swap file even if Proxmox should already have created a swap space?
 
ZFS will always have "holes" and does not support Linux swap files. IIRC, Proxmox allows a container to use as much of the host swap as its memory+swap. In your case, setting swap: 0 still allows it 1GB of swap because of memory: 1024 (with swap: 1024 it allows 2GB of swap). Because it is just a container, where the processes inside run on the host itself, you don't need additional swap inside the container. Note that the host swap is shared with all containers (and the host itself, of course).
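For example, assuming your container is ID 130 (taken from the subvol name in your config), swap can be turned off for it with something like:

Code:
# set the container swap limit to 0 (container ID 130 is an assumption)
pct set 130 --swap 0
# verify the change
pct config 130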

If your I/O latency is so low, I would say that the disk is not the bottleneck. I also know that sync writes to swap on ZFS can make everything very slow, and a slow or very busy CPU can slow things down too. If some part of the system is making everything slow, I/O latency might not go up because the disks are never pushed to their limits. Maybe disable swap on the host to test if it is part of the problem? Maybe ZFS is using 50% of your memory and causing other processes to swap? Maybe disable ZFS compression to test if it is part of the problem? Sorry, I'm just guessing and trying to remove some variables that might influence the testing.
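If you want to rule those out one by one, a rough sketch of what I would run on the host (rpool/data as the dataset behind local-zfs is an assumption based on a default installation):

Code:
# temporarily disable swap on the host
swapoff -a
# check how much RAM the ZFS ARC is currently using
awk '/^size/ {printf "%.0f MiB\n", $3/1024/1024}' /proc/spl/kstat/zfs/arcstats
# disable compression on the container storage (affects new writes only)
zfs set compression=off rpool/data
# re-enable it after the test
zfs set compression=lz4 rpool/data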
 
Well thanks for your support.
I already tried to disable compression but I didn't see any significant improvement (the write speed actually decreases without ZFS compression).

Concerning the swap: I was thinking of disabling swap in the containers' configuration files, and you confirmed that it's feasible. :)

using "free -m" on the host I can see that there's no swap space, but I also know that Proxmox already use 8GB of swap. So... How to disable it or to create a swap partition for proxmox?
 
Ok, but free -m is not showing me any mounted swap partition...
Sorry, I thought you asked how to turn off Linux swap. I don't think free shows partitions, so I'm a little confused. You said Proxmox was using 8GB of swap, but there is no swap (free -m shows Swap: 0 0 0)?
 
free -m shows me that there's no swap space, even after "swapon -av".

So I assume I don't have any swap partition...
In another post on this forum I read an admin explaining that Proxmox by default uses an 8GB swap. Am I wrong?

How do I create a swap partition on ZFS?
 
Proxmox no longer creates swap on ZFS because it can cause problems. OK, you have no swap at the moment. Therefore, sync writes to swap cannot be a cause for your performance problems. And compression is also not a bottleneck. Sorry, then I don't know what to check next.
Just to make sure: do you get 10MB/s if you run only 1 container? Or do you get 10MB/s per container when you run the job on all 125 at the same time? Or do you get 10MB/s in total when running 125 jobs? Running 125 jobs on an 8-core (16-thread) CPU might cause the CPU to become the bottleneck, and therefore you will not reach maximum I/O.
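If you want to see whether the CPU or the disks are the limit while the 125 jobs run, something like this on the host gives a rough picture (iostat comes from the sysstat package, which may not be installed by default):

Code:
# high us/sy with low wa points to a CPU limit,
# high wa and high %util in iostat points to the disks
vmstat 5
iostat -x 5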
 
Never mind, I found the way and activated the swap partition.
Now let's see... I'm assuming this was the bottleneck...

What a "wannabe" stupid issue...
I will keep you informed
 
I activated a swap partition on ZFS using:
Code:
zfs create -V 10G rpool/swap1
mkswap /dev/zvol/rpool/swap1
swapon /dev/zvol/rpool/swap1

Then I set swap: 0 in the Proxmox containers' configuration.

The host is now swapping 12GB, but... performance in the containers is now acceptable...
At the moment it seems that this change did the trick (considering that the CPU may be the bottleneck).

I got 10MB/s with all containers running, using "dd" (which I know is not a real test) and the script I mentioned in a previous message.
Why do you think that the missing swap partition was not one of the possible causes?
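To make the swap zvol survive a reboot I guess something like this is also needed; the extra properties are what the OpenZFS documentation commonly suggests for swap on zvols (I haven't verified them on this host yet):

Code:
# mount the swap zvol created above at boot
echo '/dev/zvol/rpool/swap1 none swap defaults 0 0' >> /etc/fstab
# properties commonly suggested for swap zvols
zfs set sync=always rpool/swap1
zfs set primarycache=metadata rpool/swap1
zfs set com.sun:auto-snapshot=false rpool/swap1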
 
