Hi,
I've been testing Gluster as a storage backend for my Proxmox cluster. Everything looks good, except that VM images can't be moved from Gluster to another storage. Here is the error message:
create full clone of drive scsi0 (gl_ssd:10303/vm-10303-disk-1.qcow2)
transferred: 0 bytes remaining: 17179869184 bytes total: 17179869184 bytes progression: 0.00 %
qemu-img: block/gluster.c:1290: find_allocation: Assertion `offs >= start' failed.
TASK ERROR: storage migration failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O raw gluster://172.16.0.161/gl_ssd/images/10303/vm-10303-disk-1.qcow2 zeroinit:/dev/zvol/rpool/data/vm-10303-disk-1' failed: got signal 6
A Google search turned up this Red Hat bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1451191
Gluster versions (installed from the Gluster repo):
glusterfs-client 3.12.3-1
glusterfs-server 3.12.3-1
pveversion (running an older kernel because of high I/O load on the ZFS storage):
proxmox-ve: not correctly installed (running kernel: 4.10.17-5-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.10.17-5-pve: 4.10.17-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.12.3-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
openvswitch-switch: 2.7.0-2
Any idea how to fix this?
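In case it helps anyone: the assertion fires in find_allocation() inside QEMU's gluster block driver, which probes for holes via the libgfapi seek interface, and per the linked bug report the offsets it gets back can be inconsistent. A possible workaround (untested here; the mount point path is an assumption, check /etc/pve/storage.cfg for the real one) would be to convert the image through the GlusterFS FUSE mount instead of the gluster:// URI, so qemu-img uses the plain file driver for hole detection:

```shell
# Workaround sketch: read the source image through the GlusterFS FUSE
# mount rather than the gluster:// URI, so block/gluster.c is bypassed.
# Proxmox normally mounts GlusterFS storages under /mnt/pve/<storeid>;
# the exact path below is an assumption for this setup.
SRC=/mnt/pve/gl_ssd/images/10303/vm-10303-disk-1.qcow2
DST=/dev/zvol/rpool/data/vm-10303-disk-1

qemu-img convert -p -n -f qcow2 -O raw "$SRC" "$DST"
```

That only covers a manual copy, though; the built-in "Move disk" in the GUI would presumably still hit the bug until the gluster driver fix lands.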