Adding iSCSI storage to HA cluster

curlingcanteen

New Member
Feb 28, 2013
I've got three boxes with fencing configured and tested from the command line (I'm not sure whether it's possible to test it via Proxmox).
They're each connected to a 1TB LUN via iSCSI using multipath.
I've manually created a volume group on my multipathed device:
Code:
gdisk /dev/dm-3       # partition the multipathed device (interactive)
pvcreate /dev/dm-3    # initialize it as an LVM physical volume
vgcreate vmdisks /dev/dm-3
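(Side note for anyone comparing setups: to double-check from the shell that the volume group is actually visible on every node before going to the GUI, I believe something along these lines works; vmdisks is just my VG name from above.)
Code:
pvscan                    # rescan for LVM physical volumes
vgs                       # the vmdisks VG should be listed on every node
pvs -o pv_name,vg_name    # confirm which device backs the PV
multipath -ll             # sanity-check the multipath device itself
pvesm status              # Proxmox's own view of the configured storages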
I then added the iSCSI device through the GUI, and created a volume group with a matching name [vmdisks] by following this guide.
The disk shows up as available on each node; however, I don't understand how to access that storage through the GUI. If I try to add a new VZ container, it only lets me choose local storage.

How can I move a disk into the shared storage?
 
Thanks for the explanation. When attempting to create a VM and add the storage, it comes back with
Code:
TASK ERROR: create failed - no such  volume group 'vmdisks'
, which is why I don't think I added the storage correctly.

The image in the linked guide above suggests that the selection of the Volume Group Name comes from a populated drop-down box; however, I don't get such a choice. Instead, it is an unpopulated text-entry box in which I typed the name of the VG:

[Attached screenshot: vmdisks.png]
 
I can help you fix this, but you're missing the point.

iSCSI is block-level storage, but VZ requires file-level storage. For iSCSI, you need to create an iSCSI device and then create an LVM volume group on top of that iSCSI device, but you won't be able to create VZ containers on it.
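For the container side, you'd point Proxmox at file-level storage such as an NFS share instead. As a rough illustration (the ID, server and export below are made-up placeholders), the resulting entry in /etc/pve/storage.cfg looks something like:
Code:
nfs: ct-store
        server <nfs-server-ip>
        export /srv/proxmox-ct
        path /mnt/pve/ct-store
        content rootdir,vztmpl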
 

Thanks for being gentle, ejc. I know I'm missing the point, which is why I'm asking for help.

I now understand that I'd need NFS to store CTs, and that I will only be able to add VMs to LVM, which is why I clicked "Create VM" instead of "Create CT".

I think my question is: how do I make the LVM available to store VMs in it? Is it expected that I do not receive a pre-populated drop-down, as shown in the image in my previous post? Or might I have a misconfiguration in my iSCSI / LVM setup?
 

You need to partition the LVM volume and put a filesystem like ext4, XFS, etc. on it before you can add that disk to the VM.

HTH
 
Well now that's confusing. The Proxmox Wiki clearly states:
This is a two step procedure and can be fully configured via the web interface.

So your comment about manually formatting the partition is curious. Are the steps actually:
  1. Create LUN on your SAN
  2. Add iSCSI through Prox web interface: iSCSI_LVM
    • All Nodes, Enabled[y], Use LUNs Directly[n]
  3. Drop to the command line and do your partitioning, LVMing, and formatting of the disk:
    • gdisk /dev/sde # type=8e00
    • pvcreate /dev/dm-4 # where dm-4 is the new numbered partition
    • vgcreate VolGrp /dev/dm-4
    • lvcreate -n LogicalGrp -l 100%FREE VolGrp
    • mkfs.ext4 /dev/VolGrp/LogicalGrp
  4. Back in the Prox web interface to add the LVM volume:
    • ID: ProxLogVol
    • Base Storage: Select (iSCSI) from drop-down
    • Base Volume: Select from drop-down
    • Volume Group: ProxLogVol_Grp
      • All nodes, Enabled[y], Shared[y]
  5. Create VMs with disks on the LVM storage

If so, I'm getting this error at step #4:
Code:
create storage failed: pvcreate '/dev/disk/by-id/scsi-3690b11c00039d706000031ce542524c6' error: Can't open /dev/disk/by-id/scsi-3690b11c00039d706000031ce542524c6 exclusively. Mounted filesystem? (500)
I did not mount the new ext4, although I did reboot several nodes to allow the kernel to see the new partition.
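For what it's worth, my understanding is that the "Can't open ... exclusively" error usually means something already has the device claimed (an existing LVM PV/VG signature on it, a device-mapper or multipath mapping, or a mounted filesystem). Roughly the checks I'd run before retrying:
Code:
pvs; vgs              # is the LUN already an LVM PV / part of a VG?
dmsetup ls --tree     # device-mapper mappings sitting on top of it
multipath -ll         # how multipath presents the LUN
fuser -v /dev/disk/by-id/scsi-3690b11c00039d706000031ce542524c6   # who holds it open?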
 
I've pretty much got the same situation as curlingcanteen. My iSCSI server is running Ubuntu 14.04 with the IET iSCSI target (ietd). Locally on that box, we have a ZFS pool on which I've created a couple of zvols (raw block devices) and exported them as iSCSI targets.

On the iSCSI server:

Code:
# zfs create -V 10G Performance/ubuntu1
# zfs create -V 200G Performance/pvol-test

# ls -l /dev/zvol/Performance/pvol-test
lrwxrwxrwx 1 root root 10 Oct 17 22:37 /dev/zvol/Performance/pvol-test -> ../../zd16
# ls -l /dev/zvol/Performance/ubuntu1
lrwxrwxrwx 1 root root 9 Oct 17 22:37 /dev/zvol/Performance/ubuntu1 -> ../../zd0

# cat /etc/iet/ietd.conf
Target iqn.2014-10.com.motel.notell:ubuntu1
    IncomingUser proxmox dievmwaredie!
    OutgoingUser
    Lun 0 Path=/dev/zvol/Performance/ubuntu1,Type=fileio
    Alias ubuntu1

Target iqn.2014-10.com.motel.notell:pvol-test
    IncomingUser proxmox dievmwaredie!
    OutgoingUser
    Lun 1 Path=/dev/zvol/Performance/pvol-test,Type=fileio
    Alias pvol-test

And then, like curlingcanteen, I used the GUI to add the iSCSI targets in my four-node Proxmox cluster. Per the guide, I did not check the "Use LUNs directly" box.

But then I get nowhere trying to use the GUI to "Add: LVM".
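For reference, here is the sort of thing I can check from a Proxmox node to see whether the targets and LUNs are even visible (portal IP is a placeholder; discovery may also need the CHAP user/password from ietd.conf configured on the initiator side):
Code:
iscsiadm -m discovery -t sendtargets -p <portal-ip>   # list targets offered by the portal
iscsiadm -m session                                   # active iSCSI sessions on this node
lsblk                                                 # did the LUNs show up as block devices?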
 
I'm sorry I can't help you but thank you so much for sharing your experience! I was starting to seriously question myself with all the trouble I've been having.

Sent from my HTC Glacier using Tapatalk 2
 
Oh, and I love your username. Do you use that everywhere or is it a happy result of a username generator?

Sent from my HTC Glacier using Tapatalk 2
 


Actually I use "rustybutt" most everywhere. It's a moniker a friend gave me years ago as my human name is Russ Button. Never thought it would actually be anything anyone liked.

I've gotta find a way to get past this problem or proxmox will never get a start at my company. I'd like to think it's a real VMware killer, but right now it's just a sick dog waiting to die.
 
If you use ZFS, why not use the built-in ZFS support from Proxmox?
See: http://pve.proxmox.com/wiki/Storage:_ZFS
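With the ZFS (over iSCSI) storage type, Proxmox talks to the storage box over SSH and creates a zvol per VM disk by itself, so the manual LVM layer goes away. From memory, the storage.cfg entry looks roughly like the sketch below; the wiki page above has the exact options, the portal IP is a placeholder, and the pool and IQN are just taken from the post above:
Code:
zfs: zfs-iscsi
        portal <portal-ip>
        target iqn.2014-10.com.motel.notell:pvol-test
        pool Performance
        iscsiprovider iet
        content images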
 
I'm fairly new to this topic, but as far as I understand it, you don't really need to drop to the command line if you want to use your iSCSI target as block storage (for VMs only).

iSCSI to Proxmox:
Dropdown menu (Storage tab, Add button): Add iSCSI.
Choose the portal and target, no node restriction, check Enable, uncheck "Use LUNs directly".

iSCSI to LVM:
Same dropdown menu: Add LVM.
Choose the previously added iSCSI target as Base Storage, select the LUN, name the VG, no node restriction, Enable, Shared.

Now you should be able to create VMs on that storage without further modifying it (no formatting).
Keep in mind though that it will only work for VMs, not for CTs.
Also keep in mind that the disk format of the VMs will be raw.
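If it helps to compare against a broken setup, my understanding is that after those two steps /etc/pve/storage.cfg ends up with two entries along these lines (the IDs, portal, IQN, VG name and LUN path here are placeholders, not literal values):
Code:
iscsi: san1
        portal <portal-ip>
        target <target-iqn>
        content none

lvm: san1-lvm
        vgname <vg-name>
        base san1:0.0.0.scsi-<lun-id>
        shared 1
        content images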


As far as I understood, the initial question was how to use iSCSI and LVM to store a CT? I get the problem with block devices vs. file storage; the suggested solution was to perform the above-mentioned steps minus the VM creation, and instead create an LV on top of the VG, mount it somewhere (via /etc/fstab), and format it with a filesystem. Then add it as a Directory from the same dropdown menu used to add iSCSI and LVM. However, I don't know whether this is HA-usable or not; after all, it is mounted on a single node and then redistributed to the cluster. Also, I have yet to test this configuration.
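A rough sketch of those steps (untested, as said), using the vmdisks VG from earlier in the thread and made-up names, would be something like the commands below. Note that a plain ext4 must only be mounted on one node at a time, otherwise it will get corrupted:
Code:
lvcreate -n ctdata -l 100%FREE vmdisks    # logical volume on top of the shared VG
mkfs.ext4 /dev/vmdisks/ctdata             # put a normal filesystem on it
mkdir -p /mnt/ctdata
echo '/dev/vmdisks/ctdata /mnt/ctdata ext4 defaults 0 2' >> /etc/fstab
mount /mnt/ctdata
# then add /mnt/ctdata as a "Directory" storage in the GUI (content: containers)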
 

This worked! I think I might have had to reboot all nodes after adding the iSCSI device.

As far as I understood, the initial question was how to use iSCSI and LVM to store a CT?
It was, but that was my first bit of ignorance. It was quickly explained to me that I'd need NFS for OpenVZ containers.

However, I don't know whether this is HA-usable or not; after all, it is mounted on a single node and then redistributed to the cluster. Also, I have yet to test this configuration.
Right, but this is an HA setup, so I'll leave testing that experiment to others! :)

Thanks for breaking it down, Thorn!
 
