Move LVM VG to another iSCSI storage

kvone

Hello!

I have run into the following problem.
My storage layout is: iSCSI storage (1) -> LVM pool -> VM disks.
I need to move the LVM pool to another iSCSI storage (2).
In a terminal I already ran pvmove and freed iSCSI storage (1), roughly as shown below. What do I do in the PVE interface so that the LVM pool ends up on iSCSI storage (2) and I can remove iSCSI storage (1)?
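
For reference, the command-line part looked roughly like this (VG and device names below are placeholders, not my real setup):

Code:
# create a PV on the new iSCSI disk and add it to the volume group
pvcreate /dev/sdX
vgextend vg_name /dev/sdX
# move all allocated extents from the old PV onto the new one
pvmove /dev/sdY /dev/sdX
# drop the now-empty old PV from the VG
vgreduce vg_name /dev/sdY
pvremove /dev/sdY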

Thx.
 
From the web GUI you can move VM disks (even live) from one storage to another, and even change the disk format during the move:
http://pve.proxmox.com/wiki/Storage_Migration

AFAIK you are therefore not moving LVM pools, but just copying the raw disks that sit on LVM logical volumes to a different storage (and/or converting them to a different disk format). You also get the option to delete the original disk after a successful move; otherwise it becomes unused and you can remove it by hand afterwards.
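
If you prefer the command line, the same move should be possible with qm move_disk; if I remember correctly it is something like this (VM id, disk and storage name are only examples):

Code:
# move disk virtio0 of VM 101 to the storage named 'storage2',
# deleting the old copy after a successful move
qm move_disk 101 virtio0 storage2 --delete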

Marco
 
Online disk move is not working properly for me...
I get this error:
Code:
create full clone of drive virtio0 (pool-vds-2-2:vm-101-disk-1)
  Logical volume "vm-101-disk-1" created
transferred: 0 bytes remaining: 107374182400 bytes total: 107374182400 bytes progression: 0.00 %
transferred: 41943040 bytes remaining: 107332239360 bytes total: 107374182400 bytes progression: 0.04 %
transferred: 73400320 bytes remaining: 107300782080 bytes total: 107374182400 bytes progression: 0.07 %
transferred: 115343360 bytes remaining: 107258839040 bytes total: 107374182400 bytes progression: 0.11 %
transferred: 136314880 bytes remaining: 107237867520 bytes total: 107374182400 bytes progression: 0.13 %
...
...
transferred: 107276795904 bytes remaining: 97386496 bytes total: 107374182400 bytes progression: 99.91 %
transferred: 107329224704 bytes remaining: 44957696 bytes total: 107374182400 bytes progression: 99.96 %
transferred: 107350196224 bytes remaining: 23986176 bytes total: 107374182400 bytes progression: 99.98 %
transferred: 107371167744 bytes remaining: 3014656 bytes total: 107374182400 bytes progression: 100.00 %
transferred: 107374182400 bytes remaining: 0 bytes total: 107374182400 bytes progression: 100.00 %
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  Logical volume "vm-101-disk-1" successfully removed
  device-mapper: remove ioctl on  failed: Device or resource busy
TASK  ERROR: storage migration failed: mirroring error: VM 101 qmp command  'block-job-complete' failed - The active block job for device  'drive-virtio0' cannot be completed
Is it a bug?
 
Online moving does not always work reliably; so far the only workaround is to do the move while the VM is down.
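
In other words, something along these lines (VM id and storage name are only examples):

Code:
qm shutdown 101
# offline move; the old disk is deleted after success
qm move_disk 101 virtio0 storage2 --delete
qm start 101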
 
The same problem still exists. Trying to clone a running VM to another node with an LVM target:
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


transferred: 10740039680 bytes remaining: 0 bytes total: 10740039680 bytes progression: 100.00 % busy: false ready: true
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
....
device-mapper: remove ioctl on failed: Device or resource busy
can't deactivate LV '/dev/lvm-1/vm-801-disk-1': Unable to deactivate lvm--1-vm--801--disk--1 (253:8)
Couldn't find device with uuid XFWM0E-Re1D-m0cN-FcV4-GSqf-z9ZL-kfoE0y.
Logical volume "vm-124-disk-1" successfully removed
TASK ERROR: clone failed: volume deativation failed: lvm-1:vm-801-disk-1 at /usr/share/perl5/PVE/Storage.pm line 869.

Source VM = 801
Target VM = 124

No clone was created.
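
A possible manual cleanup, untested here and only safe if nothing is actually using the LV anymore (names taken from the log above), would be:

Code:
# check whether the device-mapper node is still held open
dmsetup info lvm--1-vm--801--disk--1
# try to deactivate the LV the normal way
lvchange -an /dev/lvm-1/vm-801-disk-1
# last resort: remove the stale device-mapper node directly
dmsetup remove lvm--1-vm--801--disk--1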
 
