Kernel update

stuartbh

ProxMox Users, Developers, et alia:

I am curious whether anyone has a prediction as to when the ProxMox kernel will reach the 5.19 version range. My purpose in asking is that I have a USB Bluetooth adapter plugged into my server (so it can communicate with a Bluetooth thermometer) that requires kernel version 5.19 or later. Obviously this is not a critical concern, but I was wondering if anyone had any insight into the timing of future kernel releases.

Stuart
 

Dunuin,

Thanks for your most brisk and useful reply.

I was not aware of this, but I promptly installed that kernel version and, as such, my Bluetooth USB device is now functioning.
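(For anyone following along: the opt-in kernel comes from the regular Proxmox repositories. The exact package name depends on the release series; on Proxmox VE 7.x the 5.19 kernel is presumably pulled in with something like the following, followed by a reboot.)

    apt update
    apt install pve-kernel-5.19
    reboot
    uname -r    # should now report a 5.19.x kernel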

One additional question...

Do you know if the 5.19 kernel release fixes the issue where aio has to be set to native in order to avoid the "io-error" problem?


Stuart
 
Hi,
Do you know if the 5.19 kernel release fixes the issue where aio has to be set to native in order to avoid the "io-error" problem?
do you mean the CIFS-related issue with QEMU? Since there is no kernel fix yet, we'll automatically disable io_uring for volumes on CIFS, see here for more information. The workaround is in qemu-server>=7.2-5 (currently on the pvetest repository).
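(To see whether that qemu-server build has reached a given node, the installed package versions can be listed with pveversion; the grep filter here is just for convenience.)

    pveversion -v | grep qemu-server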
 
Hi,

do you mean the CIFS-related issue with QEMU? Since there is no kernel fix yet, we'll automatically disable io_uring for volumes on CIFS, see here for more information. The workaround is in qemu-server>=7.2-5 (currently on the pvetest repository).

Yes, that is the issue I was referring to, though I apologize for not characterizing it more accurately. As suggested in one post I read, the simple solution (for now) was to switch from AIO=io_uring to AIO=native, and that is how I have dealt with the issue.

Is it proper for me to think of "qemu-server" as the git repo within ProxMox for the more general QEMU processor emulator as customized for ProxMox? I was a bit confused by that git repo name; or is it a git repo for something else within ProxMox? It sounds to me like the final correction will come from patching the kernel.

Stuart
 
Yes, that is the issue I was referring to, though I apologize for not characterizing it more accurately. As suggested in one post I read, the simple solution (for now) was to switch from AIO=io_uring to AIO=native, and that is how I have dealt with the issue.
Yes, that is the recommended workaround which is essentially what the patch will automatically do too.
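(As a sketch of that workaround: the aio property is set per disk, either via qm set or by editing the VM configuration file. The VM ID 100, the storage name tank, and the disk/bus names below are only placeholders.)

    # via qm set (the full volume specification must be repeated)
    qm set 100 --scsi0 tank:vm-100-disk-0,aio=native

    # or directly in /etc/pve/qemu-server/100.conf
    scsi0: tank:vm-100-disk-0,aio=native

The change takes effect the next time the VM is fully stopped and started again.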

Is it proper for me to think of "qemu-server" as the git repo within ProxMox for the more general QEMU processor emulator as customized for ProxMox? I was a bit confused by that git repo name; or is it a git repo for something else within ProxMox?
The qemu-server repo is the high-level code for managing VM configurations/migration/replication/backup/etc. The repo for the modified version of QEMU we use is here, and that is doing the actual emulation/managing block level/etc.

It sounds to me like the final correction will come from patching the kernel.
Ideally yes, but the discussion hasn't been very active unfortunately.
 
Fiona, et alia:

My research seems to indicate that io_uring is more efficient and provides better performance than other asynchronous I/O methods; is that a fair assessment?

Presuming the foregoing is an accurate description of reality, how meaningful (if you know) is that performance gain?

I run one server with ProxMox and TrueNAS SCALE running thereunder, with the SAS drives passed through to TrueNAS (ProxMox runs off an SSD on an SSD-to-USB converter). That TrueNAS SCALE server houses the VMs for my other two ProxMox servers (about 10-15 VMs).

I believe I read today that using NFS (instead of CIFS) would provide proper operation when using "AIO=io_uring"; is that correct as well?

I could modify my TrueNAS server to provide my VMs via NFS instead of CIFS, but I wonder how much performance would be gained in doing so, or whether I am better off just leaving them as CIFS shares and keeping AIO set to native. Any comments on this?


Stuart
 
Fiona, et alia:

My research seems to indicate that io_uring is more efficient and provides better performance than other asynchronous I/O methods; is that a fair assessment?

Presuming the foregoing is an accurate description of reality, how meaningful (if you know) is that performance gain?
Depends on your setup and yes, in general io_uring is faster, but I'm not sure the difference is as big when it comes to network filesystems. But I'm certainly no authority on the matter ;)

I run one server with ProxMox and TrueNAS SCALE running thereunder, with the SAS drives passed through to TrueNAS (ProxMox runs off an SSD on an SSD-to-USB converter). That TrueNAS SCALE server houses the VMs for my other two ProxMox servers (about 10-15 VMs).

I believe I read today that using NFS (instead of CIFS) would provide proper operation when using "AIO=io_uring"; is that correct as well?
Yes, I don't think there are issues with NFS and io_uring currently.
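(If switching to NFS, the export would be added as an additional storage on the Proxmox side, for example with pvesm; the storage name, server address, and export path below are placeholders.)

    pvesm add nfs truenas-nfs --server 192.168.1.10 --export /mnt/tank/vmstore --content images

Existing disks would then need to be moved from the CIFS storage to the new NFS storage, e.g. with the "Move disk" action in the GUI or qm move_disk.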

I could modify my TrueNAS server to provide my VMs via NFS instead of CIFS, but I wonder how much performance would be gained in doing so, or whether I am better off just leaving them as CIFS shares and keeping AIO set to native. Any comments on this?
Again, it depends on your setup; it's best to run some benchmarks (or check whether people with similar setups have already posted results somewhere).
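(A simple way to compare the options is to run the same fio job inside a test VM once with aio=native and once with aio=io_uring, and likewise with the disk on the CIFS share versus an NFS export. The file path, size, and job parameters below are only examples.)

    fio --name=randwrite --filename=/root/fio-test --size=4G --bs=4k \
        --rw=randwrite --ioengine=libaio --iodepth=32 --direct=1 \
        --runtime=60 --time_based --group_reporting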
 
Fiona,

Thanks for all that detail! I was just looking for an answer like, "oh, native is half the speed or twice as fast" or whatever. It seems to be working fine for me now, so it's not a big deal. Once it gets fixed, I'll modify all my VMs.

Stuart
 
