Moving a Local VM to an iSCSI Storage Device

snowman

New Member
Jun 4, 2012
Hi,
Is there an easy way to migrate a local VM to an iSCSI volume? I've tried backup and restore through the web interface, but I get "can't allocate space in iscsi storage" when restoring. The same thing happens when I use the qmrestore command.
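For reference, the CLI attempt looks roughly like this (archive path, VM ID, and storage name as they appear in the log below):
Code:
qmrestore /home/backups/dump/vzdump-qemu-100-2012_06_04-12_38_52.tar.lzo 100 --storage tgt-02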

This is the output from the web interface when I try to restore a local backup to shared storage:
Code:
extracting archive '/home/backups/dump/vzdump-qemu-100-2012_06_04-12_38_52.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-ide0.raw' from archive
can't allocate space in iscsi storage
tar: vm-disk-ide0.raw: Cannot write: Broken pipe
tar: 4784: Child returned status 17
tar: Exiting with failure status due to previous errors
starting cleanup
TASK ERROR: command 'zcat -f|tar xf /home/backups/dump/vzdump-qemu-100-2012_06_04-12_38_52.tar.lzo '--to-command=/usr/lib/qemu-server/qmextract --storage tgt-02'' failed: exit code 2

I've set up tgt on a different machine and I have no problems setting up an iSCSI volume. I can connect Proxmox to that volume successfully and even set up a VM using that storage device.

What I'm trying to do is 'move' my local virtual machines to my shared storage device. If anyone has any ideas about what might be going wrong, help would be greatly appreciated!

Regards,
snowman
 
It is not possible to manage (create) iSCSI volumes from the pve host (iSCSI does not define an API for such things).

That is why people usually put LVM on top of an iSCSI volume.
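
For example, a minimal sketch of that setup (the device path and names here are placeholders; check which block device the iSCSI LUN appears as, e.g. with lsblk):
Code:
# create an LVM physical volume and volume group on top of the iSCSI LUN
pvcreate /dev/sdX
vgcreate iscsi-vg /dev/sdX
# register the volume group as LVM storage in Proxmox VE
pvesm add lvm lvm-iscsi --vgname iscsi-vg --content images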
 
It is not possible to manage (create) iSCSI volumes from the pve host (iSCSI does not define an API for such things).
I understand. I've been making my iSCSI volumes on a separate machine using tgt.

That is why people usually put LVM on top of an iSCSI volume.
Okay, so I set this up: I created an iSCSI target and then added it to an LVM volume group. But when I try to restore a machine to that storage, it gives me an incredibly long error message. Can you spot what I may have done wrong? I see that it says "insufficient free space", but I'm positive that I have enough space: this is a 12 GB machine that I'm restoring to 20 GB of space.

Code:
extracting archive '/home/backups/dump/vzdump-qemu-100-2012_06_05-11_41_43.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-ide0.raw' from archive
  /dev/sdh: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 4096: Input/output error
  Found duplicate PV 5yUGRMvLPDGOBX6oQTdXkLnOtp3BhHdS: using /dev/sdk not /dev/sdi
  /dev/sdl: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612054528: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612161024: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 4096: Input/output error
  Found duplicate PV 5yUGRMvLPDGOBX6oQTdXkLnOtp3BhHdS: using /dev/sdk not /dev/sdi
  /dev/sdl: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612054528: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612161024: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 4096: Input/output error
  Found duplicate PV 5yUGRMvLPDGOBX6oQTdXkLnOtp3BhHdS: using /dev/sdk not /dev/sdi
  /dev/sdl: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612054528: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612161024: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdh: read failed after 0 of 4096 at 4096: Input/output error
  Found duplicate PV 5yUGRMvLPDGOBX6oQTdXkLnOtp3BhHdS: using /dev/sdk not /dev/sdi
  /dev/sdl: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388542464: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 8388599808: Input/output error
  /dev/sdl: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612054528: Input/output error
  /dev/sdm: read failed after 0 of 4096 at 1612161024: Input/output error
  Rounding up size to full physical extent 32.00 GiB
  /dev/sdm: read failed after 0 of 4096 at 4096: Input/output error
lvcreate  'ubuntu-20G-VG/pve-vm-104' error:   Volume group "ubuntu-20G-VG" has  insufficient free space (5119 extents): 8193 required.
tar: vm-disk-ide0.raw: Cannot write: Broken pipe
tar: write error
tar: 91901: Child returned status 5
tar: Exiting with failure status due to previous errors
starting cleanup
TASK ERROR: command 'zcat -f|tar xf /home/backups/dump/vzdump-qemu-100-2012_06_05-11_41_43.tar.lzo '--to-command=/usr/lib/qemu-server/qmextract --storage lvm-ubuntu-20G'' failed: exit code 2

Thanks in advance,
snowman
 
I think I figured out (part of) my own problem. I'm pretty sure I had three problems that were causing errors. So far I've fixed two of them.
By removing a volume group that wasn't in use, I somehow managed to get rid of all of the input/output errors.
Code:
vgremove <volumegroup>
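(A quick sanity check beforehand, to confirm the VG really has no logical volumes left in it; <volumegroup> is a placeholder:)
Code:
lvs <volumegroup>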
Then I got rid of the duplicate physical volume. I displayed all my physical volumes using:
Code:
pvdisplay
I had to make sure it wasn't part of a volume group. Next, I found out which one was duplicated and causing problems (it was /dev/sdh for me) and removed it:
Code:
pvremove /dev/sdh
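If it's not obvious which device is the duplicate, listing the PVs together with their UUIDs should show two devices reporting the same UUID (these are standard LVM report fields):
Code:
pvs -o pv_name,vg_name,pv_uuid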

And now all that's left is the "insufficient free space" error. I'll let you know if/when I make progress with that one.

snowman
 
I think I figured out (part of) my own problem. I'm pretty sure I had three problems that were causing errors. So far I've fixed two of them.
...
And now all that's left is the "insufficient free space" error.

snowman
Hi,
do you have unused LVs in the VG which you can remove to get more free space? Post the output of:
Code:
pvs
vgs
lvs
Udo
 
...
I had to make sure it wasn't part of a volume group. Next, I found out which one was duplicated and causing problems (it was /dev/sdh for me)
Code:
pvremove /dev/sdh
...
Hi,
do you have multipath for the iSCSI RAID? In that case, you may have removed the LVM signature from the device?! Is the RAID (VG) still usable after a reboot? (I would make a backup first.)
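You can check whether multipath is involved with (assuming the multipath-tools package is installed):
Code:
multipath -ll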

Udo
 
I fixed my last error about the insufficient space, and everything is working now. I just tried it with a new backup of a smaller machine: I think I was accidentally trying to restore a 32 GB machine to 20 GB of space. Now I've restored an 8 GB machine to my 20 GB drive.
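That matches the earlier log, assuming LVM's default 4 MiB extent size: the restore asked for 8193 extents (about 32 GiB), while the volume group only had 5119 free extents (about 20 GiB).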

lvs
Code:
  LV            VG            Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  data          pve           -wi-ao-- 808.02g                                           
  root          pve           -wi-ao--  96.00g                                           
  swap          pve           -wi-ao--  11.00g                                           
  vm-101-disk-1 ubuntu-20G-VG -wi-ao--   8.00g



pvs
Code:
  PV         VG            Fmt  Attr PSize   PFree 
  /dev/sda2  pve           lvm2 a--  931.01g 16.00g
  /dev/sde   ubuntu-20G-VG lvm2 a--   20.00g 11.99g

vgs
Code:
  VG            #PV #LV #SN Attr   VSize   VFree 
  pve             1   3   0 wz--n- 931.01g 16.00g
  ubuntu-20G-VG   1   1   0 wz--n-  20.00g 11.99g

However, when I run these commands on my other node, I get a lot of input/output errors.
 
It is not possible to manage (create) iSCSI volumes from the pve host (iSCSI does not define an API for such things).

That is why people usually put LVM on top of an iSCSI volume.

Hey :)

First off, Proxmox is a great product. I use it in my small to medium-sized company and it's doing a good job.

Now to my question: could you explain in more detail why it is not possible to restore a raw image directly to a raw iSCSI target? I do it with an LVM group as well and that works, but I'm interested in why a direct restore to an iSCSI target doesn't work without creating an LVM volume group on it first.

Nice evening ... best regards, tobi
 
