Errors moving VM disks from NAS to local storage on newly installed Proxmox v7 hosts

Tau

Greetings fellow Proxmox users,

I have a question. I have recently started migrating my VMs from two Proxmox v6.4-8 hosts to two Proxmox v7.4-3 hosts. All hosts are part of the same cluster and have extra local storage defined as LVM. I also have two NAS nodes added to the cluster. I started the process of moving my VMs from the v6 hosts to the v7 hosts by first moving the disks to one of my NAS nodes, then migrating the VMs to a v7 host, and finally, as a last step, trying to move the VM disk to the local storage on the v7 host. At that point I get this error:

TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)

I have since realised that if I use the web GUI on one of the v7 hosts, I can set Async IO to native on the disk, turn my VM off and back on, and the disk can then be moved to the local storage on the v7 host.

[Screenshot: Hard Disk edit dialog with the Async IO setting]
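I assume the same change can also be made from the CLI with qm set; a rough sketch, with VM ID 100, a disk on scsi0 and the storage name standing in for my actual values:

Code:
# show the current drive line; drives without an explicit aio= use the io_uring default on v7
qm config 100 | grep scsi0
# re-declare the drive with aio=native (storage and volume name must match the existing ones)
qm set 100 --scsi0 mynas:vm-100-disk-0,aio=native

The VM still needs a full stop and start afterwards for the new Async IO mode to take effect.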

My questions:
- If I change this setting to native, would this be detrimental to the performance of my NAS when I run my disks there?
- I would also like to know if there are any settings I can change to make the local storage on the v7 hosts accept VM disks running with the io_uring setting.
- Lastly, I haven't yet found good online resources that explain what it is I am even trying to configure here. If someone has a nice resource to learn a bit more about Async IO, io_uring, native and threads, that would be very helpful.

Thanks in advance!
 
I have seen this forum post, and it was very helpful. I still do not understand how to configure my local storage in such a way that it accepts VM disks with the io_uring setting. The disks are SSDs in a hardware RAID-10, configured as LVM. Whenever I move a disk to that storage from my NAS I get the error: TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)
It seems like my target storage cannot handle io_uring, but I fail to understand why.

Edit: Could the fact that I enabled caching on my hardware RAID-10 setup be the reason io_uring is not accepted?
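For completeness, the local storage in question is defined in /etc/pve/storage.cfg roughly like this (the storage and volume group names here are placeholders for my real ones):

Code:
lvm: local-lvm-raid10
        vgname vg_raid10
        content images,rootdir
        shared 0

So it is a plain "lvm" type storage entry on top of the hardware RAID-10.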
 
As you have seen in the study, in the Findings section, io_uring and native offer similar performance. If you are hell-bent on forcing io_uring, then you need to dig a bit deeper.

The first step is to find where that warning comes from:
Code:
grep -R "target storage is known to cause issues with" /usr/share/perl5/PVE/
/usr/share/perl5/PVE/QemuServer.pm:    die "target storage is known to cause issues with aio=io_uring (used by current drive)\n"

Looking at that file we find:
Code:
# Check for bug #4525: drive-mirror will open the target drive with the same aio setting as the
# source, but some storages have problems with io_uring, sometimes even leading to crashes.

Now we can open the actual bug to read some history behind it:
https://bugzilla.proxmox.com/show_bug.cgi?id=4525

Analyzing the code further, we can see that if the storage does not pass "storage_allows_io_uring_default", the error is produced.
Let's look at what that means:

Code:
grep -R storage_allows_io_uring_default /usr/share/perl5/PVE/
/usr/share/perl5/PVE/QemuServer.pm:my sub storage_allows_io_uring_default {
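If you want to read the whole sub in context, something like this should print it (assuming the sub is still defined the same way in that file):

Code:
# print from the sub definition up to the next closing brace at column 0
sed -n '/^my sub storage_allows_io_uring_default/,/^}/p' /usr/share/perl5/PVE/QemuServer.pm

The part that matters for your LVM storage is: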

Code:
    # io_uring with cache mode writeback or writethrough on LVM will hang, without cache only
    # sometimes, just plain disable...
    return if $scfg && $scfg->{type} eq 'lvm';



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
bbgeek17, thanks for the eye-opener. I would never have figured that part out myself! Grepping that file gives a lot of insight.
My VMs' disks run with the default cache mode (no cache) under Edit: Hard Disk.

If I read the Blockbridge article on performance correctly, aio=native can still block on IO, but io_uring is guaranteed not to block. Since I am just a beginner in all this: is it wrong to assume that blocking IO can cause a major disaster, while slightly underperforming disks are less of an issue, especially when the VM's disks run on remote storage?

I guess whole studies are made around these subjects :)
 
You can read more on IO blocking here: https://forum.proxmox.com/threads/p...ve-io_uring-and-iothreads.116755/#post-552092

There is nothing disastrous in IO blocking; it's how things have been for years. Of course it affects latency to a degree. When disk interfaces were slow, IO blocking was masked; when they got fast (NVMe), it came to the forefront and developers started looking at solving it.
I did not get the impression that you are running a critical high-performance infrastructure, so I am not sure why you would want to circumvent a barrier that the PVE developers put in place to prevent a system crash, which would be disastrous for many.

You wrote: "My VMs' disks run with the default cache mode (no cache) under Edit: Hard Disk."
But the code comment says: "io_uring with cache mode writeback or writethrough on LVM will hang, without cache only sometimes". So even without a cache, io_uring on LVM can still hang occasionally, hence the blanket disable.


 
I did not get the impression that you are running a critical high-performance infrastructure, so I am not sure why you would want to circumvent a barrier that the PVE developers put in place to prevent a system crash, which would be disastrous for many.

That's correct, we mainly run DNS, DHCP, etc. and some monitoring servers, which are critical to our customers but certainly nothing high-performance :-)

What is the barrier you are referring to, the one the developers put in place and that I am trying to circumvent? Do you mean I should not enable caching?
 
the barrier is PVE not allowing io_uring for combinations where there are known issues.
 
the barrier is PVE not allowing io_uring for combinations where there are known issues.

Thanks for your answer :) I will have to set all my VMs to aio=native.

Are these problems recent? All my VMs running on the Proxmox v6.4-8 hosts have the default io_uring setting enabled; it's only since I started moving to the Proxmox v7.4-3 hosts that I have encountered this error.

I was hoping I could change some setting on my LVM storage to allow io_uring to work properly.
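In the meantime I will check which of my drives already carry an explicit aio setting; I assume something like this works, since the VM configs live on the clustered filesystem (paths as on a standard PVE install):

Code:
# drive lines that explicitly set an aio mode, across all nodes in the cluster
grep -H 'aio=' /etc/pve/nodes/*/qemu-server/*.conf
# anything without aio= falls back to the io_uring default on v7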
 
the problem is not new, but the check is ;)
 
Excellent, thanks for clearing that up for me :)

As per bbgeek17's articles and recommendations, I will set my SCSI controller to VirtIO SCSI single to enable IO Thread, and then set Async IO to native.
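If I understand it correctly, the final per-VM change boils down to something like this on the CLI (VM ID, storage and volume name are placeholders for my setup):

Code:
# switch the controller so each disk gets its own IO thread
qm set 100 --scsihw virtio-scsi-single
# re-declare the disk with iothread enabled and native async IO
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,aio=native
# stop and start the VM afterwards so the new settings are applied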

Very informative stuff this.
 