HowTo: use separate external partitions inside a VM (KVM)

ACE_Map

New Member
Dec 31, 2009
Hi,

I'm using KVM for my virtual machines, and I want to use some extra partitions from an existing RAID array inside the VMs.

For example:
Host-System running proxmox resides on /dev/sda.

The host system also has a RAID array on /dev/sdb[1,2,3,4], and I want to use /dev/sdb1 inside my VMs. /dev/sdb1 is formatted as ext3 and holds data that should be accessible from different VMs and (maybe) from outside, too.

/dev/sdb1 therefore cannot be used as VM storage to place a VM itself on. It is simply a data container that should be accessible from multiple VMs.

Any ideas how this could be managed?
 
Thanks.

Host-System:
Code:
ide0: VMs:102/vm-102-disk-1.qcow2
ide1: /dev/mapper/diskvg-test

But unfortunately this presents the given partition to the guest as a complete hard disk. Then I can partition this hard disk again. But I only wanted to mount and use the partition itself without modifying it in the first step.
 
But unfortunately this presents the given partition to the guest as a complete hard disk. Then I can partition this hard disk again.

Why is that a problem?

But I only wanted to mount and use the partition itself without modifying it in the first step.

Simply mount it - what is the problem?
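For reference, this is roughly what it can look like inside the guest (a sketch; the device name /dev/sdb and the mount point are assumptions, not from the posts above). Because the host hands over a partition, the guest sees the ext3 filesystem directly on the whole virtual disk, with no partition table:

```shell
# Inside the guest: the ext3 filesystem was created on the host's
# partition, so in the guest it sits directly on the whole virtual
# disk (assumed here to be /dev/sdb), not on /dev/sdb1.
mkdir -p /mnt/data
mount -t ext3 /dev/sdb /mnt/data

# Optional /etc/fstab entry to mount it at every boot:
#   /dev/sdb  /mnt/data  ext3  defaults  0  2
```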
 

Mounting an ext3 partition from multiple guests can cause data corruption. Why don't you use NFS, for example?
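A minimal NFS setup could look like the following (a sketch; the export path /srv/data, the subnet, and the server address are assumptions):

```shell
# On the machine exporting the data (possibly a separate box rather
# than the Proxmox host itself): add an entry to /etc/exports, e.g.
#   /srv/data  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the export table:
exportfs -ra

# In each guest that needs the data (assumed server IP):
mkdir -p /mnt/data
mount -t nfs 192.168.1.10:/srv/data /mnt/data
```

The NFS server then arbitrates concurrent access, which is exactly what direct block-device access from multiple guests cannot do.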
 
Why is that a problem?

Simply mount it - what is the problem?

Okay, let's assume I have a valid partition accessible on the Proxmox host system as /dev/mapper/diskvg-test (10 GB, ext3-formatted, containing test data).

If I attach this partition via an ide entry in the conf file, the VM presents this ext3 partition not as a partition but as a whole hard disk. Running fdisk inside the VM complains that there is no valid partition table on this hard disk.

If I partition this "hard disk" inside the VM, I will corrupt the existing valid data on the underlying /dev/mapper/diskvg-test, won't I?
 
Oops, okay, forget it.

I have just tried to mount the complete new hard disk directly, and what can I say: it works!
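One way to double-check before mounting that the filesystem really sits on the whole virtual disk (a sketch; /dev/sdb is an assumed device name):

```shell
# blkid probes the device and reports any filesystem found directly
# on it; an ext3 signature on /dev/sdb itself means there is no
# partition table and the disk can be mounted as-is.
blkid /dev/sdb

# file -s reads the first blocks of the device and identifies
# the filesystem, too:
file -s /dev/sdb
```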

Thank you for your patience.
 
Mounting an ext3 partition from multiple guests can cause data corruption. Why don't you use NFS, for example?
Let the Proxmox host run an NFS server and then export the directories? Might there be a performance issue compared to direct access to the partition? Also, I don't know how this mapping of IDE devices is done inside the host system with KVM; perhaps it also goes through the IP stack.
 
Let the Proxmox host run an NFS server and then export the directories? Might there be a performance issue compared to direct access to the partition? Also, I don't know how this mapping of IDE devices is done inside the host system with KVM; perhaps it also goes through the IP stack.

Proxmox VE is a virtualization host. As it is based on Debian Linux, you can install an NFS server, but this is not recommended.

If you need shared storage, go for a SAN/NAS on a separate box with full manageability (e.g. Openfiler).
 
You can't access an ext3 partition from multiple guests - that will not work, and you will end up with a damaged filesystem.
Yes, you're right. If multiple systems have write privileges, there is no mutual exclusion. So I will end up with only one VM being granted access to a dedicated partition.