iscsi issue

eislon

I'd like to have my VMs on an iSCSI target, so I found a thread here for Proxmox v1.x. I have installed the latest Proxmox 2.1 and have done the following:

- on my QNAP NAS I created an iSCSI target with a LUN attached to it.
- entered the required info in the Proxmox web admin.
- made an LVM group.
- but now it only offers the drive for images, not for containers. How can I fix this?

In the thread for Proxmox v1 I followed this:
"
Originally Posted by tom
See http://pve.proxmox.com/wiki/Storage_...etwork_Backing
"

But I can't change or add the containers section to the LVM group or iSCSI target!

When I try to download a template to the iSCSI target or LVM group, I get the following message:
cannot download to storage type 'iscsi' at /usr/share/perl5/PVE/API2/Nodes.pm line 788. (500)
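For context, this is roughly what the resulting /etc/pve/storage.cfg entries look like for an iSCSI target with an LVM group on top of it. The storage names, portal address, target IQN, and base volume below are made-up placeholders, not taken from this thread; note the `content images` line on the LVM entry, which is why the GUI only offers that storage for images:

```
# sketch -- names, portal, target and base are made-up placeholders
iscsi: qnap-iscsi
        portal 192.168.1.50
        target iqn.2004-04.com.qnap:ts-419:iscsi.pve
        content none

lvm: qnap-lvm
        vgname pve-qnap
        base qnap-iscsi:0.0.0.scsi-lun0
        content images
```

LVM storage in Proxmox only supports the `images` content type, so templates and containers cannot be placed there.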
 
Hi,
for containers you need local storage or NFS storage (better, because then you can use online migration).
LVM storage is for KVM only.

Udo
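Udo's NFS suggestion corresponds to an NFS entry in /etc/pve/storage.cfg with the `rootdir` (container) and `vztmpl` (template) content types enabled. The server address and export path below are assumptions made for illustration; the storage name matches the one that appears later in this thread's migration log:

```
# sketch -- server address and export path are assumptions
nfs: qnapNFS-proxmox1
        server 192.168.1.50
        export /share/proxmox
        path /mnt/pve/qnapNFS-proxmox1
        content images,rootdir,vztmpl,iso
```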
 
So your advice is to mount the iSCSI target to a directory locally, and then it could be used for containers.
Should this then be done on all cluster nodes? Because I thought for online migration it is best to have an iSCSI target, so that just the VM information is pushed to the other node and the data stays on the iSCSI target (my QNAP).
 
Hi,
if you want to mount a block device (like an iSCSI LUN) on more than one system at the same time, you need a cluster filesystem!
This is the reason why NFS is much easier to use.

Udo
 

Hmm, but I thought Proxmox had a cluster FS, so I need to figure out how this works!
Can I use an iSCSI LUN as NFS between more nodes?
 
You can configure your QNAP to have both an iSCSI LUN and an NFS share at the same time. This setup will give you the possibility of online migration for both KVM and VZ. KVM can use the iSCSI storage and VZ can use the NFS share.
 
Mir,

Thanks, I just tried it on two virtual Proxmox machines (2.1), and the NFS share is working, but it seems that live migration halts the VM!
I have to look a bit further into why, but basically the NFS share works best right now for the two nodes; going to test some things.

Thanks again.
 
Do you by any chance have a CD-ROM attached to your VM? An attached CD-ROM obviously cannot be migrated from one host to another. A migratable VM should look something like the attached screenshot.
 

No, there is no CD-ROM attached; a migration looks like this:
"
Aug 17 18:33:52 starting migration of CT 154 to node 'eismox2' (192.168.1.61)
Aug 17 18:33:52 container is running - using online migration
Aug 17 18:33:52 container data is on shared storage 'qnapNFS-proxmox1'
Aug 17 18:33:53 start live migration - suspending container
Aug 17 18:33:53 dump container state
Aug 17 18:33:53 dump 2nd level quota
Aug 17 18:33:54 initialize container on remote node 'eismox2'
Aug 17 18:33:54 initializing remote quota
Aug 17 18:34:13 turn on remote quota
Aug 17 18:34:13 load 2nd level quota
Aug 17 18:34:13 starting container on remote node 'eismox2'
Aug 17 18:34:13 restore container state
Aug 17 18:34:13 start final cleanup
Aug 17 18:34:13 migration finished successfuly (duration 00:00:21)
TASK OK
"
It tells me that the container is being suspended, then some kind of message flashes by too fast to read, and I haven't found it in a log yet, but I keep on searching.

 
I see no error here - what am I missing?

Please use the right terms for KVM guests and OpenVZ containers; otherwise you confuse people.

For KVM guests we use VM, for OpenVZ containers we use CT.
 

Hi Tom,

there is no error in the log, but that is not the issue; what I see is that the CT is suspended. Is this how online migration works?
"
Aug 17 18:33:52 container is running - using online migration
Aug 17 18:33:52 container data is on shared storage 'qnapNFS-proxmox1'
Aug 17 18:33:53 start live migration - suspending container
"

Or am I wrong, and does online migration only work truly online with KVM?
After migration, the CT is brought online again!

I made tests with the early 1.7 and later versions of Proxmox; back then the ping time just got a bit higher, but now the CT really stops!

Sometimes when I try to move the CT with online migration, it tells me the following:
"
Aug 18 19:05:29 vzquota : (error) Quota is running, stop it first
Aug 18 19:05:29 ERROR: online migrate failure - Failed to initialize quota: vzquota init failed [5]
Aug 18 19:05:29 start final cleanup
Aug 18 19:05:29 ERROR: migration finished with problems (duration 00:00:01)
TASK ERROR: migration problems
"

Then I first have to start the CT on the node it is on and retry the migration; then it works.
Perhaps this might be an issue of the two Proxmox nodes being VMs (under VMware Fusion) that came out of suspend mode!
I will look into the logs of the Proxmox VMs to check this out.
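The retry workaround described above can be sketched as a command sequence on the source node. The CT ID 154 and node name 'eismox2' are taken from the logs in this thread, but the `pvectl migrate` invocation and its `-online` flag are assumptions about the PVE 2.x CLI, so check `pvectl help` first (the migration can also be retried from the web GUI):

```
# sketch of the workaround described above -- run on the source node
vzctl start 154                        # bring the CT back up where it is
pvectl migrate 154 eismox2 -online     # then retry the online migration
```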
 
If you are running PVE inside another hypervisor, then you are running into uncharted territory. What is the purpose of running a hypervisor inside another hypervisor?
 
First of all, to test the new Proxmox, and secondly, it is also mentioned as a way to create OpenVZ containers!
Let me tell you that it works, but I was a bit surprised that the CT was suspended during a live migration; perhaps my previous tests (a long time ago) were with KVM virtual machines, and those worked in virtual Proxmox environments! Basically the tests are working well, so it will go to hardware, but shared storage on a NAS is new to me, which is why I have to test this first, and VMs are easier to set up and more available than hardware (even when we just use Linux CTs ;-)

Is a cluster FS really hard to learn?
 
