Hi,
with this PVE version:
Code:
proxmox-ve: 8.4.0 (running kernel: 6.8.12-10-pve)
pve-manager: 8.4.5 (running version: 8.4.5/57892e8e686cb35b)
proxmox-kernel-helper: 8.1.4
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve3
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.2
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.2
libpve-cluster-perl: 8.1.2
libpve-common-perl: 8.3.2
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.3-1
proxmox-backup-file-restore: 3.4.3-1
proxmox-backup-restore-image: 0.7.0
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.4
proxmox-mail-forward: 0.3.3
proxmox-mini-journalreader: 1.5
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.12
pve-cluster: 8.1.2
pve-container: 5.3.0
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-4~bpo12+1
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.2
pve-firmware: 3.16-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.5
pve-qemu-kvm: 9.2.0-7
pve-xtermjs: 5.5.0-2
qemu-server: 8.4.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.8-pve1
Proxmox systematically cancels migrations of large drives (> 16 TiB).
In our example, the storage migration goes from ZFS to NFS.
Without any real error, Proxmox just cancels it with:
Code:
storage migration failed: block job (mirror) error: drive-virtio1: File too large (io-status: ok)
The task log shows:
Code:
drive-virtio1: transferred 16.0 TiB of 29.3 TiB (54.59%) in 6h 8m 38s
drive-virtio1: transferred 16.0 TiB of 29.3 TiB (54.59%) in 6h 8m 39s
drive-virtio1: transferred 16.0 TiB of 29.3 TiB (54.60%) in 6h 8m 40s
drive-virtio1: Cancelling block job
drive-virtio1: Done.
TASK ERROR: storage migration failed: block job (mirror) error: drive-virtio1: File too large (io-status: ok)
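If it helps narrow this down: I assume the target filesystem could be tested directly, independent of PVE, by creating a sparse file just above 16 TiB on the NFS mount. Some filesystems (e.g. ext4 with 4 KiB blocks) cap file size at exactly 16 TiB, which would produce the same "File too large" (EFBIG) error at that point. The storage ID "nfs-target" and paths below are only placeholders for our setup, and raw is just an example format:
Code:
# on the PVE host, directly on the mounted NFS storage
truncate -s 17T /mnt/pve/nfs-target/images/test-bigfile.raw
# or the same via qemu-img
qemu-img create -f raw /mnt/pve/nfs-target/images/test-bigfile.raw 17T
# clean up
rm -f /mnt/pve/nfs-target/images/test-bigfile.raw
If these also fail with "File too large", the limit would be on the export's filesystem rather than an artificial PVE limit.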
This was working before with a late 7.x version, and somewhere during 8.x an artificial limit seems to have been introduced?!
If this is really the case, the questions would be:
1. How do we live-migrate it?
2. If the limit really exists, why even start a migration that is doomed to fail? PVE checks the size before the transfer, so it knows in advance that it is going to fail (see the sketch below).
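Just to illustrate point 2, the disk size is already known before any data moves. A hypothetical pre-check (VM ID 100 and storage ID "nfs-target" are made up for the example):
Code:
# the configured disk size is visible before the migration starts
qm config 100 | grep ^virtio1
# e.g.: virtio1: local-zfs:vm-100-disk-1,size=30000G
# the target storage can be queried as well
pvesm status --storage nfs-target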
Maybe someone can shed some light on this... thank you!