Proxmox VE with TrueNAS Proxmox VE Storage Plugin issue - when powered-off VMs are moved

wla

New Member
Mar 19, 2026
PVE 9.1.6 with iSCSI TrueNAS Plugin 2.06
Configuration:
Code:
┌────────────────────┐
│ PVE1 9.1.6         │vmbr0: iscsi(VLAN 110) 192.168.110.12/24
│               NIC0 x─────────────────────────────────────────
│ with               │        mgmt(VLAN 20)  172.21.20.132/24
│ truenasplugin 2.06 │        VMs (many VLANs)
│                    │
│                    │
│                    │vmbr1: iscsi(VLAN 111) 192.168.111.12/24
│               NIC1 x─────────────────────────────────────────
│                    │
└────────────────────┘

(a second, identical PVE2 is planned)


┌────────────────────┐
│ truenas scale NIC0 x─────┐ mgmt+smb (VLAN 25) 172.21.25.28/24
│ 25.10.2.1          │bond0x───────────────────────────────────
│               NIC1 x─────┘
│                    │
│                    │         iscsi (VLAN 110) 192.168.110.112/24
│               NIC2 x─────────────────────────────────────────
│                    │
│               NIC3 x─────────────────────────────────────────
│                    │         iscsi (VLAN 111) 192.168.111.112/24
└────────────────────┘

TrueNAS plugin 2.06 is configured.
Multipath works fine; network traffic is distributed across both NICs.
! But: moving a disk ("storage vMotion") of a powered-off(!) VM fails with an error. If you repeat the same move while the VM is powered on, it works.
It makes no difference whether multipath is used or not.
The difference between the two cases is how the copy is performed. See the logs below, with/without "mirror-virtio1".

Powered ON, works fine (log from Task viewer):
create full clone of drive virtio1 (s1533-ssd-iscsi:vol-vm-3001-disk-1-lun4)
drive mirror is starting for drive-virtio1
mirror-virtio1: transferred 0.0 B of 96.0 GiB (0.00%) in 0s
mirror-virtio1: transferred 1.0 GiB of 96.0 GiB (1.05%) in 1s
mirror-virtio1: transferred 2.0 GiB of 96.0 GiB (2.13%) in 2s
mirror-virtio1: transferred 3.1 GiB of 96.0 GiB (3.26%) in 3s
mirror-virtio1: transferred 4.2 GiB of 96.0 GiB (4.36%) in 4s
mirror-virtio1: transferred 5.2 GiB of 96.0 GiB (5.46%) in 5s
:

! Powered OFF, fails:
create full clone of drive virtio1 (s1533-ssd-iscsi:vol-vm-3001-disk-1-lun4)
transferred 0.0 B of 96.0 GiB (0.00%)
transferred 983.0 MiB of 96.0 GiB (1.00%)
transferred 1.9 GiB of 96.0 GiB (2.00%)
qemu-img: error while writing at byte 2910846464: Device or resource busy
TASK ERROR: storage migration failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw /dev/mapper/mpathef /dev/mapper/mpatheg' failed: exit code 1
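The "Device or resource busy" from qemu-img suggests something still holds one of the mapper devices open while the VM is off. A quick way to check is to look at the device-mapper open counts and any userspace holders (a diagnostic sketch only; mpathef/mpatheg are the device names from the task log above, and the stale-holder cause is an assumption):

```shell
#!/bin/sh
# Diagnostic sketch: check whether the multipath devices are still held open
# while the VM is powered off. An "Open count" > 0 with no running VM points
# at a stale holder (e.g. leftover partition maps or LVM on top of the LUN).
echo "=== device-mapper open counts ==="
if command -v dmsetup >/dev/null 2>&1; then
    # one line per dm device: name and current open count
    dmsetup info -c --noheadings -o name,open 2>/dev/null || echo "no dm devices"
else
    echo "dmsetup not installed"
fi
# Any userspace process with the target device open, if the device exists:
if [ -e /dev/mapper/mpatheg ]; then
    fuser -v /dev/mapper/mpatheg 2>&1 || true
fi
```

If the open count on the target device is nonzero with the VM stopped, that would explain why the offline `qemu-img convert` path fails while the online drive-mirror path (which goes through the running QEMU process) succeeds.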

Does anyone else have this problem, or does this work for anyone else?
Thanks for the replies!
 
Hi wla,

Welcome to the forums!

I don't have a similar setup to reproduce the issue, but I do have a couple of questions:
  • Is the move performed over the management network in both cases?
  • Does the failure persist with a degraded bond, i.e. with either NIC0 or NIC1 disconnected?
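For the second test, it may help to capture the path and session state before and after disconnecting a NIC, so you can confirm multipath actually failed over (a generic diagnostic sketch, not specific to the TrueNAS plugin):

```shell
#!/bin/sh
# Diagnostic sketch: snapshot multipath and iSCSI state. Run once with both
# NICs connected and once degraded; the affected paths should move from
# active/ready to failed/faulty while I/O continues on the surviving path.
echo "=== multipath topology ==="
if command -v multipath >/dev/null 2>&1; then
    multipath -ll 2>/dev/null || echo "no multipath maps"
else
    echo "multipath-tools not installed"
fi
echo "=== iSCSI sessions ==="
if command -v iscsiadm >/dev/null 2>&1; then
    # shows which portal (192.168.110.x vs 192.168.111.x) each session uses
    iscsiadm -m session 2>/dev/null || echo "no iSCSI sessions"
else
    echo "open-iscsi not installed"
fi
```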