LVM on iSCSI from NFS

Apr 29, 2021
Hi.
I'm about to move from NFS to LVM on multipath iSCSI as shared storage in our Proxmox cluster.

I understand that I need to change the disk definitions on the VMs to use async IO native (aio=native) instead of the default io_uring, and that I need to restart the VMs to do that.

We have about 230 VMs to move, so it won't be done in a day. I was wondering: is it safe to change the async IO to native and still run them on NFS until I get around to moving the storage?

The NFS shares are of different kinds: some are TrueNAS with ZFS underneath, and some are plain old NASes.

best regards
--
Markus
 
If you want to bypass the GUI: back up everything, power down the VMs, and you can try sed on the entries in /etc/pve/qemu-server/*.conf

This is the difference:


scsi0: local-lvm:vm-100-disk-0,aio=native,cache=writeback,discard=on,iothread=1,size=31G,ssd=1

scsi1: tosh10-xfs-multi:100/vm-100-disk-22.raw,backup=0,cache=writeback,discard=on,size=1G,ssd=0
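
If you want a starting point for the sed part, here is a minimal, untested sketch. The VMID (100), disk key (scsi0) and file names are placeholders taken from the examples above, so adjust them to your setup and keep a copy of the configs before touching anything:

# keep a copy of the VM configs before editing anything
cp -a /etc/pve/qemu-server /root/qemu-server-backup-$(date +%F)

# disk line that has no explicit aio option yet: append aio=native to it
sed -i 's/^\(scsi0: .*\)$/\1,aio=native/' /etc/pve/qemu-server/100.conf

# disk line that already says aio=io_uring explicitly: swap it instead
sed -i 's/aio=io_uring/aio=native/' /etc/pve/qemu-server/100.conf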


Do some testing with 1..5 VMs and see if I/O is still good.

See also:

https://github.com/kneutron/ansitest/blob/master/proxmox/proxmox-migrate-disk-storage.sh

https://github.com/kneutron/ansitest/blob/master/proxmox/bkpcrit-proxmox.sh

Lots of other good stuff in that repo ;-)
 
Hi @markusbernhard,

You are not required to change away from io_uring when moving to LVM on iSCSI.

The main downside of aio=native, when used with file-based storage, is that it can block inline during I/O submission in some exceptional cases. This would show up as high latency in the guest. If you go this route, I'd recommend using an IO thread.
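
Roughly, something like the following should do it from the CLI. The VMID (100), storage (san-lvm) and volume name below are placeholders, not your actual config; iothread requires the VirtIO SCSI single controller, qm set replaces the whole drive definition (so repeat the disk's existing options and add the new ones), and the change only takes effect after a full stop/start of the VM:

# placeholder VMID/storage/volume - copy the existing drive options from the VM config and add the two new ones
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 san-lvm:vm-100-disk-0,cache=writeback,discard=on,aio=native,iothread=1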

You might find this overview helpful - https://kb.blockbridge.com/technote/proxmox-aio-vs-iouring/#proxmox-io-options



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks @bbgeek17 and @Kingneutron, much appreciated.

I've read the Blockbridge link multiple times, but I thought I'd just ask... :)

I cannot live-migrate to LVM on iSCSI while the disks are set to io_uring; the GUI just refuses to migrate the disks with a note that says "TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)". I might be able to do it through pvesm, but I didn't try that.
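
(The CLI equivalent of the GUI's Move disk is qm move_disk; a hypothetical invocation would look like the line below, with the VMID, disk key and target storage name being placeholders, though I'd expect it to run into the same aio check:)

qm move_disk 100 scsi0 san-lvm --delete 1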

My headache here is that these are customer servers, and we are having trouble scheduling downtime for all of them at once. On top of that, not all servers use VirtIO disks; quite a few use SATA, so an IO thread is not an option there unless I convert them to VirtIO disks.

Just trying to come up with a decent migration plan here... I still have about 100 more VMs in VMware to migrate, so I'm not short of work at the moment :)
 
I have exactly the same problem. I could shut down a few of these VMs to switch to aio=native, start them again immediately and then do the migration, but I'd rather not have to do this. Does anybody have any suggestions?

I migrated from LVM on iSCSI to NFS without shutting down, and now that the LVM storage has been physically moved I want to switch them back (without shutting down). Why can't I do this?