Can Virtual machines see SAN partitions via HBAs?

pfmasse60

New Member
Feb 16, 2010
I have a SAN which is connected to the Proxmox VE via fiber channel HBAs. The Proxmox VE can see the Logical Volumes just fine. I am not currently storing any KVMs on these Volumes. However, I would like to create virtual machines (OpenVZ or KVM) that would be able to see the same Logical Volumes as the Proxmox VE sees and mount them from within the Virtual Machine. Is this possible or are Virtual Machines only able to mount SAN partitions via TCP/IP from another machine?
 
Accessing LVs from the command line of the Proxmox VE itself is no problem. The question is how to access those LVs from the command line of a virtual machine?
 

Sorry - I do not understand what you mean. Have you already tried to use the SAN as LVM storage?
 
Hi,
I think you mean bind mounts (OpenVZ) or giving one VM direct access to a single hard disk/LUN (e.g. exporting /dev/sdh to VM XXX).
You can find examples in the forum, but I don't find this approach very nice (though it is sometimes necessary).

The "normal" way - a VG on a LUN, with logical volumes on that VG holding the containers for the VMs - is much easier, and you can use live migration.

Udo
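A minimal sketch of that "normal" approach, run on the Proxmox VE host (the device path /dev/sdb and the VG name sanvg are assumptions - substitute the actual FC LUN):

#!/bin/bash
# Initialise the SAN LUN as an LVM physical volume and create a volume group on it
pvcreate /dev/sdb
vgcreate sanvg /dev/sdb

# Confirm the host sees the new volume group and its free space
vgs sanvg

The volume group can then be added as an LVM group in the storage module, and each VM disk created on that storage becomes a logical volume inside sanvg.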
 
Perhaps I don't fully understand how the storage option is used in Proxmox.

Here's an example of what I'm trying to do. Let's say I create a virtual machine (an OpenVZ container or a KVM guest) for a client. That client logs in and has their website and email server hosted on their virtual machine, and all is great. Then one day they decide they want to store some data. They could store it on their virtual machine, but I only allocated 10 GB of disk space when I created it. However, I do have a SAN connected to my Proxmox server with many terabytes of space, and I decide I want to give my client access to a few gigabytes of that space from their container.

How can I do that?
 
Hi,
read this http://forum.proxmox.com/threads/3179-where-do-you-see-the-iscsi-attached-storage?p=17904
It's similar to your question.

Udo
 
After studying all the material, let me see if I have this right:

1. In the storage module, add an LVM group.
2. After adding the LVM group, any existing Logical Volumes should be visible in the storage module. (I can see my existing Volume Group name in the drop-down list.)
3. At this point I should be able to mount any existing Logical Volumes.
4. These newly mounted Logical Volumes will also be available to my containers. (Right?)

Question:
I have existing data on the Logical Volumes already created. Will this initial setup process make any changes to my Volume Group? i.e. Will it erase my existing data?
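For what it's worth, adding an existing volume group as LVM storage only registers it in the Proxmox storage configuration; it should not touch the logical volumes already on it. That is easy to verify from the host before and after step 1 (the VG name sanvg is an assumption):

# List the volume group and its logical volumes with their sizes
vgs sanvg
lvs sanvg

# Show detailed metadata, including the free extents left for new VM disks
vgdisplay sanvg

If the output of lvs is identical before and after adding the storage entry, nothing was erased.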
 
I believe I found the answer. I did a search for the bind mounts which you spoke of earlier in this thread and this will work for me.

#!/bin/bash
# OpenVZ mount hook: bind-mount a directory on the host into the container's
# root filesystem (/var/lib/vz/root/${VEID}) when the container starts
/bin/mount -n --bind /media/shared /var/lib/vz/root/${VEID}/media/shared

exit $?
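For context - and the exact paths here are an assumption about this setup - OpenVZ runs a per-container mount hook, usually /etc/vz/conf/${VEID}.mount, each time the container starts, so a script like the one above would typically be installed along these lines (VEID 101 and the file name bindmount.sh are made up):

# Install the script as the per-container mount hook and make it executable
cp bindmount.sh /etc/vz/conf/101.mount
chmod +x /etc/vz/conf/101.mount

# Restart the container so vzctl runs the hook
vzctl stop 101 && vzctl start 101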
 
Here's the conclusion of my quest. The following script works with a Fibre Channel-mounted SAN using LVM.

#!/bin/bash
# Give the container's root filesystem a moment to be set up before binding
sleep 1
# Bind-mount the host directory (where the SAN logical volume is mounted) into the container
/bin/mount -n --bind /test /var/lib/vz/root/${VEID}/media/testmount
exit $?

The logical volume needs to be mounted to a directory on the Proxmox VE host first, because mount with the --bind option takes two directories as arguments and complains when the first argument (/test in this case) is a device.

Second, the script needs to sleep for an extra second to allow the mount point to be created; otherwise it tries to mount to a directory which does not yet exist. Starting and stopping the container from the command line with vzctl start <VMID> and vzctl stop <VMID> is a huge help for seeing problems immediately.
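A sketch of the host-side preparation this describes (the VG name sanvg and the LV name clientdata are assumptions, and mkfs is deliberately omitted because the volumes already hold data):

# On the Proxmox VE host: mount the SAN logical volume on the directory
# that the container's mount hook will bind into the container
mkdir -p /test
mount /dev/sanvg/clientdata /test

# The ${VEID}.mount hook above then bind-mounts /test to /media/testmount
# inside the container once it starts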
 
