[SOLVED] Problems with moving disk VM

mitroviqui

New Member
Oct 3, 2022
Hi, I'm getting this error every time I try to move a VM disk to another storage. I should mention that all of my NAS storage (physical Synology units) is mounted over NFS.

TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)

Can someone help me? Sorry for my English, I'm translating with Google.
 
Change the disk so it does not use Async IO = io_uring (in the Advanced section).
 
Select the disk in the GUI and click Edit.
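For reference, the same change can also be made from the CLI with qm set; note that this replaces the whole drive definition, so the other existing options have to be repeated. The VM ID, bus/device and volume below are placeholders, copy your own drive line from qm config and only change the aio value:

Code:
# switch the drive's Async IO mode to native (re-specify the rest of the drive options as well)
qm set 100 --scsi0 mynfs:100/vm-100-disk-0.qcow2,aio=native,discard=on,size=32G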
 
Hi all,

This kind of error is annoying...

A default option that generates errors is not acceptable, at least for us.

Can we hope someone will solve the problem?

If not, I will switch our clusters from io_uring to native.
I don't know the performance gain or loss, but with our workloads losing a few percent is acceptable; an error is not.

Thank you,

Christophe.
 
There is no one-size-fits-all setting, so unless you change from NFS to something better, you need to change the I/O setting.
Yes, that is my understanding.

In my case it's a disk move from Ceph storage to an iSCSI one.

Best solution for me: choose Async IO = threads, which seems to get along with any cache policy and any storage type.

Regards,

Christophe.
 
One way to work around this problem is to set the aio=native parameter directly in the configuration file /etc/pve/qemu-server/<vmid>.conf, migrate the disk, and then remove the aio=native parameter again. With this approach the machine does not need to be rebooted. Tested on Proxmox VE 8.1 and 7.4.
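A rough sketch of that sequence, assuming VM ID 100, a virtio0 drive and a target storage named "tank" (all example names; use qm move_disk instead of qm move-disk on older releases):

Code:
# 1) add aio=native to the drive line in /etc/pve/qemu-server/100.conf, e.g.:
#      virtio0: local:100/vm-100-disk-0.qcow2,aio=native,discard=on,size=20G
# 2) move the disk to the target storage
qm move-disk 100 virtio0 tank
# 3) afterwards remove ",aio=native" from the drive line again; no reboot is needed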
 
It works! First use the Edit disk option in the GUI and change "Async IO = io_uring" (in the Advanced section) to "Async IO = native", then move the disk from local storage to a ZFS over iSCSI LUN.

Digging into the issue: my problem was trying to move a disk from local storage to a ZFS over iSCSI LUN storage. It failed with the message "TASK ERROR: storage migration failed: Could not open /dev/zpool0/vm-107-disk-0".
Looking in the Proxmox shell, I saw there is a problem with the path of the symbolic link that the "Move Disk" operation assigns to the moved disk.
Example:
root@pmxserver:~# ls -l /dev/zvol/zpool0/
total 0
lrwxrwxrwx 1 root root 11 Apr 18 12:54 vm-104-disk-0 -> ../../zd320
lrwxrwxrwx 1 root root 10 Dec 13 11:58 vm-106-disk-0 -> ../../zd64 <---- ok path
lrwxrwxrwx 1 root root 8 Apr 19 12:00 vm-107-disk-0 -> ../zd336 <----- not ok: link resolves to /dev/zvol/zd336, but the zvol device is located in /dev

As you can see, the symbolic link for the vm-107 disk resolves under /dev/zvol, while the actual device (zd336) is located in /dev. The disk itself was created fine, but the move process stops and fails because it cannot find the disk.

Conclusion: the "Move Disk" operation assigned an incorrect symlink path to the disk, so the move process fails because it cannot find the file or the VM's disk.
Perhaps with the Async IO advanced option changed, a correct path is assigned for the VM's disks.
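A quick way to check whether such a zvol symlink actually resolves to an existing device (pool and device names are the ones from the example above):

Code:
# -e only prints the resolved path if the target really exists, so a broken link prints nothing
readlink -e /dev/zvol/zpool0/vm-107-disk-0
# the actual zvol device node in this example lives directly under /dev
ls -l /dev/zd336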
 
I encountered this issue while attempting to migrate a VM from local storage (directory) to Ceph storage, combined with an HA migration. The HA migration failed silently—no error was displayed, and the process never started moving the system.

When I manually initiated a storage migration for the affected VM (without performing a host migration), I received the following error:

Code:
TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)

In my case, the error was not related to aio=io_uring as indicated, but rather to the cache mode, which was set to writeback.

To avoid shutting down the VM to change the cache mode, I was able to perform the storage migration successfully after modifying the configuration file manually as follows:

From:
Code:
virtio0: <storage_id>:100/vm-100-disk-0.qcow2,cache=writeback,discard=on,iothread=1,size=20G
To:
Code:
virtio0:  <storage_id>:100/vm-100-disk-0.qcow2,discard=on,iothread=1,size=20G
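After editing the file, the resulting drive line can be verified with qm config before retrying the migration (VM ID 100 as in the example above):

Code:
# show the current virtio0 drive line as stored in the VM configuration
qm config 100 | grep ^virtio0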

Disclaimer: This approach may be unsafe, but it worked for me and allowed the migration without requiring a VM shutdown. The VM did not encounter any IO issues afterwards.

Hopefully this helps others who run into this error and cannot shut down the VM to apply the changes manually.
 
I encountered this issue while attempting to migrate a VM from local storage (directory) to Ceph storage, combined with an HA migration. The HA migration failed silently—no error was displayed, and the process never started moving the system.
HA already expects shared (or replicated) storage, so please do not turn on HA before your VM is on shared storage.
When I manually initiated a storage migration for the affected VM (without performing a host migration), I received the following error:

Code:
TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)

In my case, the error was not related to aio=io_uring as indicated, but rather to the cache mode, which was set to writeback.
The cache setting does affect whether io_uring is used as a default or not: https://git.proxmox.com/?p=qemu-ser...0205132bd75ee81c14cf92a2c5dc30c;hb=HEAD#l1511
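One way to see which Async IO mode each drive of a VM actually ends up with (including the implicit default) is to look at the generated QEMU command line; VM ID 100 is just an example:

Code:
# print the QEMU command Proxmox would use for this VM and filter for the aio settings
qm showcmd 100 --pretty | grep aio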
To avoid shutting down the VM to change the cache mode, I was able to perform the storage migration successfully after modifying the configuration file manually as follows:

From:
Code:
virtio0: <storage_id>:100/vm-100-disk-0.qcow2,cache=writeback,discard=on,iothread=1,size=20G
To:
Code:
virtio0:  <storage_id>:100/vm-100-disk-0.qcow2,discard=on,iothread=1,size=20G

Disclaimer: This approach may be unsafe, but it worked for me and allowed the migration without requiring a VM shutdown. The VM did not encounter any IO issues afterwards.
Good disclaimer, it could lead to issues since QEMU migration expects the same parameters for source and target instance.
 
HA already expects shared (or replicated) storage, so please do not turn on HA before your VM is on shared storage.

The cache setting does affect whether io_uring is used as a default or not: https://git.proxmox.com/?p=qemu-ser...0205132bd75ee81c14cf92a2c5dc30c;hb=HEAD#l1511

Good disclaimer, it could lead to issues since QEMU migration expects the same parameters for source and target instance.
Hey @fiona,

Thank you for your detailed reply—I really appreciate the insights you provided!

To clarify how I ended up in this non-standard and unsupported situation (as you correctly pointed out):

Due to specific circumstances within the cluster, I needed to ensure that the VM in question was migrated to a specific host. During this process, the storage was manually migrated from Ceph to local storage because the Ceph cluster had to be placed in 'maintenance' for a short period. As expected, the disk settings from Ceph were carried over to the local storage.

However, after the maintenance period ended, migrating the storage back to Ceph was not possible due to the error I shared earlier.

Given that this VM is critical and should ideally remain online, I decided to attempt the workaround I described. While I understand the risks, it allowed me to avoid downtime and successfully complete the migration.

I hope this explanation helps clarify the reasoning behind my actions and provides more context as to why I approached it this way.

Best regards,
Bent
 
