Abysmal IO delay inside containers

inDane

Dear Proxmoxers,

I have two boxes: one is running TrueNAS with a ZFS RAIDZ1 pool and exports it via NFS.
The other one is running Proxmox VE 7.2 (pve-host).
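
For context, the NFS storage on the pve-host is defined roughly like this in /etc/pve/storage.cfg (storage ID, server address and export path are just placeholders, not my exact values):

nfs: truenas-nfs
    export /mnt/tank/pve
    path /mnt/pve/truenas-nfs
    server 192.168.1.10
    content images,rootdir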

If I run, for example,
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4G --filename=./testfile.fio

on the pve-host, directly on the NFS share, everything is fine: the IO delay doesn't spike much and stays below ~15%. But if I run the above command inside an Ubuntu container whose raw disk lives on that share, the IO delay immediately spikes to 99%, the share eventually times out, and everything hangs.
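
For completeness, I launch the same job inside the container from the pve-host roughly like this (the container ID 101 is just an example, and fio is installed inside the container):

pct exec 101 -- fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4G --filename=/root/testfile.fio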

I seem to be missing something significant here. Can you give me a hint?

EDIT: Writing IO seems to be the problem. Reading is just fine.
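
What I compared was roughly a write-only vs. a read-only variant of the job (same placeholders as above):

fio --name=wtest --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --readwrite=randwrite --size=4G --filename=./testfile.fio
fio --name=rtest --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --readwrite=randread --size=4G --filename=./testfile.fio

The randwrite job drives the IO delay up, the randread job does not.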

Best
inDane
 
I can reliably crash my system by putting IO load inside a container whose image is stored as a raw disk on the NFS share...
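
For what it's worth, this is roughly how I checked where the container's disk actually lives (CT ID, storage name and disk name are placeholders):

pct config 101 | grep rootfs
(shows something like: rootfs: truenas-nfs:101/vm-101-disk-0.raw,size=8G)
losetup -l
(while the CT is running, the raw file on the NFS mount shows up as the backing file of a loop device)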
 
You can limit BW on a container.
On the network device, you mean? How would that affect the underlying NFS?

Client -> SSH -> Container -> fio command -> IO towards NFS.

If you are talking about the network device, then it would only affect the "Client -> SSH" part.
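
To make that concrete: the only per-container rate limit I'm aware of is the one on the NIC, something like

pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,rate=50

(values and CT ID are examples, rate is in MB/s). That caps traffic on the container's own network device, but the NFS writes are issued by the pve-host itself, so they are not covered. If the suggestion is to cap the container's block writes instead, e.g. with a cgroup v2 io.max entry for the loop device in /etc/pve/lxc/101.conf along the lines of

lxc.cgroup2.io.max: 7:0 wbps=52428800

then I haven't tried that, and I don't know whether it works at all for a raw image sitting on NFS.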
 
