Unable to clone or migrate storage of VM on LVM on iSCSI storage

hnguk
Feb 6, 2026
I usually run LXCs and haven't hit this issue with them, but when trying to clone a VM or move its storage, the task consistently fails at around the 2 GB point.

Details:
3-node Proxmox cluster on Proxmox 9.1.5:
Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.5 (running version: 9.1.5/80cf92a64bef6889)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.17.4-1-pve-signed: 6.17.4-1
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
proxmox-kernel-6.14.11-5-pve-signed: 6.14.11-5
proxmox-kernel-6.14: 6.14.11-5
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20251111.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.7
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.5
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.2-1
proxmox-backup-file-restore: 4.1.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.1.0
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.4
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

The storage is TrueNAS 25.10.0, sharing a single vdev/extent per iSCSI target, with LVM on top of each.

This is the error shown in the log for the clone task in the WebUI:

Code:
create full clone of drive scsi0 (disk-store:vm-9000-disk-1)
  Wiping ext4 signature on /dev/proxmox/vm-122-disk-0.
  Logical volume "vm-122-disk-0" created.
transferred 0.0 B of 16.0 GiB (0.00%)
transferred 163.8 MiB of 16.0 GiB (1.00%)
transferred 327.7 MiB of 16.0 GiB (2.00%)
transferred 491.5 MiB of 16.0 GiB (3.00%)
transferred 655.4 MiB of 16.0 GiB (4.00%)
transferred 819.2 MiB of 16.0 GiB (5.00%)
transferred 984.7 MiB of 16.0 GiB (6.01%)
transferred 1.1 GiB of 16.0 GiB (7.01%)
transferred 1.3 GiB of 16.0 GiB (8.01%)
transferred 1.4 GiB of 16.0 GiB (9.01%)
transferred 1.6 GiB of 16.0 GiB (10.01%)
transferred 1.8 GiB of 16.0 GiB (11.01%)
transferred 1.9 GiB of 16.0 GiB (12.01%)
qemu-img: error while writing at byte 2147483136: Invalid argument
  Logical volume "vm-122-disk-0" successfully removed.
TASK ERROR: clone failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw /dev/proxmox/vm-9000-disk-1 /dev/proxmox/vm-122-disk-0' failed: exit code 1
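One thing I noticed: the failing offset may be meaningful. 2147483136 is exactly 512 bytes short of the 2 GiB boundary, which (speculatively) smells like a request-size or zero-write limit on the block device rather than bad data. A quick sanity check of the arithmetic:

```shell
# 2 GiB in bytes, minus the offset qemu-img reported
echo $(( 2 * 1024 * 1024 * 1024 - 2147483136 ))
```

This prints 512, i.e. one sector short of 2^31 bytes.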

I am not fully sure where the issue is. A dd read of the source volume to /dev/null completes fine, and I have ~22 LXCs running their root disks on this storage without problems; as mentioned, I can clone and move storage for them perfectly fine.

Code:
root@pve01:~# dd if=/dev/proxmox/vm-9000-disk-1 of=/dev/null bs=4M status=progress
17007902720 bytes (17 GB, 16 GiB) copied, 58 s, 293 MB/s
4096+0 records in
4096+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 58.5855 s, 293 MB/s
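Since reads succeed, the limit may be on the write/zeroing path. One thing worth checking (a sketch; "sdX" and "dm-X" are placeholders, not real device names) is the kernel block-queue limits for the iSCSI disk and the LV's device-mapper node:

```shell
# Sketch: dump block-queue limits that can make large or zeroing writes
# fail with EINVAL. "sdX" and "dm-X" are placeholders for the iSCSI disk
# and the LV's device-mapper node (use `lsblk` to find the real names).
for dev in sdX dm-X; do
  echo "== $dev =="
  q=/sys/block/$dev/queue
  for f in max_sectors_kb max_hw_sectors_kb write_zeroes_max_bytes; do
    printf '%s: %s\n' "$f" "$(cat "$q/$f" 2>/dev/null || echo n/a)"
  done
done
```

If write_zeroes_max_bytes or the max-sectors values differ sharply between the iSCSI disk and the dm node, that would be a useful data point for a bug report.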

Not sure what other information to provide, but hopefully this is enough to go on. Happy to answer questions :D
 
There were recently some bug fixes in this area. Make sure you are running with the latest code.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I have just updated one of my Proxmox nodes and am still getting the same error, unless you mean updating TrueNAS?

Updated package list on the Proxmox node:
Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.9-1-pve)
pve-manager: 9.1.5 (running version: 9.1.5/80cf92a64bef6889)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.9-1-pve-signed: 6.17.9-1
proxmox-kernel-6.17: 6.17.9-1
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17.4-1-pve-signed: 6.17.4-1
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
proxmox-kernel-6.14.11-5-pve-signed: 6.14.11-5
proxmox-kernel-6.14: 6.14.11-5
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve4
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx12
intel-microcode: 3.20251111.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.7
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.5
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.2-1
proxmox-backup-file-restore: 4.1.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.1.0
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.4
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.4.0-pve1
 



 
Okay, thanks for the information. I'll update the cluster and TrueNAS, try again, and if it still fails I guess I'll wait for a future update?
 
It doesn't seem that either reporter filed a bug with PVE. You may want to do so. I doubt the backend SAN version change will have any effect on this behavior.


What would be the correct process for raising a bug report? (I have never filed one before :confused:)

I have updated the entire cluster (bar pve-container, which I held back because another bug meant I couldn't start containers with bind mounts) and TrueNAS to 25.10.1, and the issue definitely still persists. Interestingly, LXCs do not exhibit it.

Latest VM clone test:
Code:
create full clone of drive scsi0 (disk-store:vm-9000-disk-1)
  Wiping PMBR signature on /dev/proxmox/vm-122-disk-0.
  Logical volume "vm-122-disk-0" created.
transferred 0.0 B of 16.0 GiB (0.00%)
transferred 163.8 MiB of 16.0 GiB (1.00%)
transferred 327.7 MiB of 16.0 GiB (2.00%)
transferred 491.5 MiB of 16.0 GiB (3.00%)
transferred 655.4 MiB of 16.0 GiB (4.00%)
transferred 819.2 MiB of 16.0 GiB (5.00%)
transferred 984.7 MiB of 16.0 GiB (6.01%)
transferred 1.1 GiB of 16.0 GiB (7.01%)
transferred 1.3 GiB of 16.0 GiB (8.01%)
transferred 1.4 GiB of 16.0 GiB (9.01%)
transferred 1.6 GiB of 16.0 GiB (10.01%)
transferred 1.8 GiB of 16.0 GiB (11.01%)
transferred 1.9 GiB of 16.0 GiB (12.01%)
qemu-img: error while writing at byte 2147483136: Invalid argument
  Logical volume "vm-122-disk-0" successfully removed.
TASK ERROR: clone failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw /dev/proxmox/vm-9000-disk-1 /dev/proxmox/vm-122-disk-0' failed: exit code 1

Latest LXC clone test:
Code:
create full clone of mountpoint rootfs (disk-store:vm-118-disk-0)
  Wiping PMBR signature on /dev/proxmox/vm-122-disk-0.
  Logical volume "vm-122-disk-0" created.
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: d10cec37-b913-4223-90da-036aa8de580c
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Number of files: 52,302 (reg: 41,394, dir: 7,252, link: 3,632, special: 24)
Number of created files: 52,300 (reg: 41,394, dir: 7,250, link: 3,632, special: 24)
Number of deleted files: 0
Number of regular files transferred: 41,385
Total file size: 4,166,474,884 bytes
Total transferred file size: 4,161,942,490 bytes
Literal data: 4,161,942,490 bytes
Matched data: 0 bytes
File list size: 2,097,054
File list generation time: 0.003 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 4,166,038,681
Total bytes received: 845,919

sent 4,166,038,681 bytes  received 845,919 bytes  46,042,923.76 bytes/sec
total size is 4,166,474,884  speedup is 1.00
TASK OK

Confirmed that the LXC boots and the service running on it (Proxmox Datacenter Manager) works.
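To narrow down whether the device itself rejects a write at that offset (independent of qemu-img), a direct single-sector write at the exact failing byte might be worth trying. This is only a sketch: the LV path below is illustrative (the clone target from the logs, which gets removed on failure), so create a scratch LV you can afford to scribble on first.

```shell
# Sketch: write one 512-byte sector at the exact offset qemu-img failed on
# (2147483136 = 4194303 * 512). Only run this against a scratch LV!
dd if=/dev/zero of=/dev/proxmox/vm-122-disk-0 \
   bs=512 seek=4194303 count=1 oflag=direct status=none && echo "write ok"
```

If this fails with "Invalid argument" too, that points at the kernel/iSCSI write path rather than qemu-img, which would be worth including in the bug report.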
 
What would be the correct process for raising a bug report? (I have never filed one before :confused:)
https://pve.proxmox.com/pve-docs/getting-help-plain.html
https://pve.proxmox.com/wiki/Introduction#getting_help

