Disk Migrations with io_uring

adamb

Getting this error when trying to move a disk.

[screenshot of the error message from the disk move]

I'm moving disks from Ceph to LVM iSCSI storage. What I don't understand is that all the disks already on the LVM iSCSI storage are using aio=io_uring by default.

So if aio=io_uring is bad for this disk type, why is it the default when creating a VM on that storage, yet I can't move the disk to it?

[attached screenshot]
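For context, the per-disk async IO setting lives in the VM configuration; the following is a minimal sketch with placeholder names (VM ID 100, storage "lvm-iscsi"). A disk line in /etc/pve/qemu-server/100.conf such as

scsi0: lvm-iscsi:vm-100-disk-0,size=32G

carries no explicit aio= option, so it falls back to the default, which is io_uring on current Proxmox VE versions; an explicit choice shows up as an extra option on the same line, e.g. aio=native or aio=threads. The same information can be printed with:

qm config 100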
 
Are you running a cluster, and are all nodes properly updated and running the latest software?
What is the output of "pveversion -v", and what are the current settings of the Ceph drive?

On the latest 7.4 I can't reproduce your error message, so perhaps the checks were relaxed. The defaults/warnings went through a few iterations.
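For example, assuming a placeholder VM ID of 100, that information can be gathered with:

pveversion -v
qm config 100

where the disk line in question (scsi0, virtio0, ...) shows any explicit aio=, cache= or iothread= options set on the Ceph-backed drive.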


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

No, at the moment some nodes are on a newer version than others, as we are having performance and live migration issues between kernels.
 
Hi,
some storages don't support io_uring well, while others work fine with it.

The problem is that currently, live storage migration can't change it on the fly.
Yes, without changing it, you risk crashes and hangs when migrating to certain storages: https://git.proxmox.com/?p=qemu-server.git;a=commit;h=8fbae1dc8f9041aaa1a1a021740a06d80e8415ed

There is a plan to change some internals so the setting can be changed during disk migration/mirroring, but it's not implemented yet.
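Until that lands, a possible workaround, sketched with placeholder names (VM 100, disk scsi0 currently on a Ceph storage called "ceph-pool", target storage "lvm-iscsi"), is to pin the disk to an async IO mode the target storage handles before the move:

# check the current disk line first
qm config 100
# re-specify the disk with an explicit aio mode (repeat any other options already on the line)
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,aio=native
# then move it to the target storage
qm disk move 100 scsi0 lvm-iscsi

The aio change is only picked up after a full stop and start of the VM (a reboot from inside the guest keeps the old QEMU process running), and on older versions the last command is spelled qm move_disk instead of qm disk move.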
 
Hello,

Is this issue still being experienced, or has it been solved with newer kernels?
The commit mentions LVM and kernel 6.1; the current kernel in Proxmox is 6.8, and it seems a few more io_uring changes will land in 6.10, which should be coming soon (https://github.com/axboe/liburing/wiki/What's-new-with-io_uring-in-6.10).

I have looked around a little, but I have not found any documentation of how or why io_uring does not play well with LVM, so I cannot tell whether this still applies or not.

Thanks

Luca
 
Hi,
The relevant commits are:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=ec5d198e5b60b5810d8a49570de617dcce72d468
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=78a3ada744b87a92f825265d5b49d7eae55e7084
with the first one linking to a report in the forum.

The issues might be gone in the meantime, but it would need to be re-evaluated. If you want to evaluate it on your setup, you can turn on io_uring for VM disks on LVM storages by explicitly selecting the setting when editing the VM's disk (Advanced settings, Async IO, instead of the Default setting).
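From the CLI, the equivalent would be to re-specify the disk with an explicit aio option, e.g. (placeholder VM ID and volume, keeping whatever other options are already on the line):

qm set 100 --scsi0 lvm-iscsi:vm-100-disk-0,aio=io_uring

followed by a full stop and start of the VM so the new setting takes effect.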
 
