Mounting GPT disk in VM

jaimito

Member since Aug 30, 2010
I have seen the thread at:

http://forum.proxmox.com/threads/3589-Using-local-harddisk-with-existing-data-as-storage

but as this is old have started a new post.

I have a large (14 TB) RAID 6 array on an Areca controller, partitioned with cgdisk (though it could have been parted) and set up per the general recommendations for GPT disks: a protective MBR and partition, with spare space between each of three 3.5 TiB partitions.

I want to mount each of these partitions as a physical data drive in each of three VMs (full VMs, not containers, though in principle they could be containers). The machines will archive a lot of data and I am unhappy about putting 3 or 4 TB virtual filesystems around critical data :) I also want to be able to move the physical disks to another machine easily if necessary.

I can mount the GPT partitions without issue in the host, but when I try to do this in the guests I receive an error message as follows:

Code:
root@box1:~# mount -t ext4 /dev/sda3 /data
mount: special device /dev/sda3 does not exist

What am I doing wrong?

Thanks!
 
Hi,
how did you define the disk inside the VM? I guess you used something like "ide1: /dev/sdb2" (or qm set ...).
In that case the VM sees a disk without an MBR/partition table, because what you passed is already a partition.
Look at the disks inside the VM (with "fdisk -l") and try to mount with "mount /dev/sdb /mnt" - that should work.

BTW: if you use your RAID area as LVM storage, your VM disks live as logical volumes on the VG - very clean. I think that is much better than such a partition construct (and much more flexible).

Udo
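
The steps Udo describes can be sketched as follows; the VM ID (102), the partition (/dev/sda3) and the guest-side device name are illustrative assumptions, not commands taken verbatim from this setup:

```shell
#!/bin/sh
# On the Proxmox host: attach one GPT partition of the array to the VM
# as an extra disk. VMID and PART are illustrative values.
VMID=102
PART=/dev/sda3
HOST_CMD="qm set $VMID --ide1 $PART"
echo "$HOST_CMD"    # run this on the host

# Inside the guest, the passed-through partition appears as a whole
# disk with no partition table (e.g. /dev/sdb), so mount it directly:
GUEST_CMD="mount /dev/sdb /mnt"
echo "$GUEST_CMD"   # run this inside the guest
```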
 
Thanks for the reply.

Originally Posted by udo
Hi,
how did you define the disk inside the VM? I guess you used something like "ide1: /dev/sdb2" (or qm set ...).
In that case the VM sees a disk without an MBR/partition table, because what you passed is already a partition.
Look at the disks inside the VM (with "fdisk -l") and try to mount with "mount /dev/sdb /mnt" - that should work.

This is where I am stuck; fdisk will not pick up GPT disks...

The Areca array appears as /dev/sda in the host. Here is what happens when I run parted:

Code:
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  32.9GB  32.9GB  primary   ext3            boot
 2      32.9GB  34.4GB  1443MB  extended
 5      32.9GB  34.4GB  1443MB  logical   linux-swap(v1)

(parted) select /dev/sda
Error: Could not stat device /dev/sda - No such file or directory.        
Retry/Cancel?

It does not see the partitions...

I installed gptfdisk and this does not see the partitions either:

Code:
root@box1:~# gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.1

Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!

Does Proxmox pass through the underlying GPT partitions to guests?

If not this may be an issue in the future...

Or (as I suspect) am I overlooking something?

:confused:

Originally Posted by udo
BTW: if you use your RAID area as LVM storage, your VM disks live as logical volumes on the VG - very clean. I think that is much better than such a partition construct (and much more flexible).
Udo

:cool: Yes, I have usually done this, and it works very well, but LVM on GPT is something I have yet to get my head around! First I have to get to see the partitions...

j
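
For what it's worth, LVM on a GPT partition works the same as LVM on any other block device; a minimal sketch, with invented partition and volume-group names (the commands are only echoed here - run them for real on the host, against a partition holding no data you care about):

```shell
#!/bin/sh
# Turn one GPT partition into an LVM physical volume, build a volume
# group on it, and carve out a logical volume for a VM disk.
PART=/dev/sda3          # assumed partition
VG=vg_archive           # invented VG name
CMDS="pvcreate $PART
vgcreate $VG $PART
lvcreate -L 500G -n vm-102-disk-1 $VG"
echo "$CMDS"
```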
 
Originally Posted by jaimito
Thanks for the reply.



This is where I am stuck; fdisk will not pick up GPT disks...

The Areca array appears as /dev/sda in the host. Here is what happens when I run parted:

Code:
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  32.9GB  32.9GB  primary   ext3            boot
 2      32.9GB  34.4GB  1443MB  extended
 5      32.9GB  34.4GB  1443MB  logical   linux-swap(v1)

(parted) select /dev/sda
Error: Could not stat device /dev/sda - No such file or directory.        
Retry/Cancel?
Hi, you have mixed up host and VM!
If your Areca volume is sda on the host, that says nothing about the device names inside the VM. In your case you don't have an sda inside your guest at all!
So again my question from the first posting: how do you define the disks for the guest? Post the VM config (under /etc/qemu-server).
Originally Posted by jaimito
It does not see the partitions...

I installed gptfdisk and this does not see the partitions either:

Code:
root@box1:~# gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.1

Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!

Does Proxmox pass through the underlying GPT partitions to guests?

If not this may be an issue in the future...

Or (as I suspect) am I overlooking something?

:confused:



:cool: Yes, I have done this usually, and it works very well, but lvm on gpt is something I have yet to get my head around! First I have to get to see the partitions...

j
KVM passes to the guest exactly what you define - if you pass the whole disk, the guest sees the whole disk. If you pass only a partition, the guest sees a disk which in real life is a partition (and therefore has no partition table). If you create a partition table on such a disk, you end up with a partition table inside a partition - not very handy if you want to access the data from the host.
BTW, LVM on GPT partitions works very well - why not?

Udo
 
Originally Posted by udo
Hi, you have mixed up host and VM!
If your Areca volume is sda on the host, that says nothing about the device names inside the VM. In your case you don't have an sda inside your guest at all!
So again my question from the first posting: how do you define the disks for the guest? Post the VM config (under /etc/qemu-server).
Thanks again for the reply.

Um, I don't think I am mixing them up, but I am certainly misunderstanding something!

My point is that the GPT disk is mountable on the host OK.

In the post I referred to above you said:

Re: Using local harddisk with existing data as storage


Originally Posted by janka
Hi

If I do this with an ext2 or ext3 partition, the partition shows up in the guest as an unformatted drive, and I can then fdisk, mkfs and mount it in the guest.
But I want the existing data from the partition in the guest, and it sounds to me like that is what is happening for others here? What am I doing wrong?

(If I do "qm set <cmid> --ide1 /dev/sdc" it works, and I can mount sdc2 etc in the guest, but thats not what I want...)
jan

Hi Jan,
you exported a partition, not a disk - so you don't have a partition table inside the guest!
But you can mount the filesystem directly:
Code:

fdisk -l /dev/sdb
  (no partition table)
mount /dev/sdb /mnt
ls /mnt
  (here is the content of the partition)


This is what I want to do as well, but I have not yet achieved it, and I cannot use fdisk as it does not recognise GPT disks.

The VM config is untouched, I have not attempted to mount the disk from there:

Code:
box1:/etc/qemu-server# cat 102.conf
name: s1.example.com
ide2: none,media=cdrom
vlan0: rtl8139=nn:38:00:nn:nn:nn
bootdisk: virtio0
virtio0: VirtualMachines:102/vm-102-disk-1.raw
ostype: l26
memory: 2048
sockets: 1
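
Notably, this config contains no entry for the Areca partition at all, which is why no sda shows up in the guest. Passing it through would mean an extra line along these lines (the device name here is an assumption), added e.g. with "qm set 102 --ide1 /dev/sda3":

```
ide1: /dev/sda3
```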

Originally Posted by udo
KVM passes to the guest exactly what you define - if you pass the whole disk, the guest sees the whole disk. If you pass only a partition, the guest sees a disk which in real life is a partition (and therefore has no partition table). If you create a partition table on such a disk, you end up with a partition table inside a partition - not very handy if you want to access the data from the host.
BTW, LVM on GPT partitions works very well - why not?
Udo

I have been using LVM on top of raw disks <2 TB (with Proxmox and other Debian systems); I don't see any need for a partition there. However, I am trying to work out the best way of achieving my objectives here, and running LVM on a GPT partition on RAID 6 seems redundant.

Of course, instead of partitioning the large array I could create three smaller arrays, each presented as a single disk, and each raw disk could then easily have LVM on it...

My apologies for not 'getting' this...

TIA!
 
Originally Posted by jaimito
Thanks again for the reply.

Um, I don't think I am mixing them, but I am certainly misunderstanding!

My point is that the GPT disk is mountable on the host OK.

In the post I referred to above you said:




This is what I want to do as well, but I have not yet achieved this and cannot use fdisk as it does not recognise GPT disks.

The VM config is untouched, I have not attempted to mount the disk from there:

Code:
box1:/etc/qemu-server# cat 102.conf
name: s1.example.com
ide2: none,media=cdrom
vlan0: rtl8139=nn:38:00:nn:nn:nn
bootdisk: virtio0
virtio0: VirtualMachines:102/vm-102-disk-1.raw
ostype: l26
memory: 2048
sockets: 1
We are moving in a circle... how do you want to mount a partition inside a VM without telling the VM which partition/disk to use? Magic? Your 102.conf contains no entry for the Areca partition, so the guest cannot see it.
Read the posting again and just try it.

Originally Posted by jaimito
I have been using LVM on top of raw disks <2 TB (with Proxmox and other Debian systems); I don't see any need for a partition there. However, I am trying to work out the best way of achieving my objectives here, and running LVM on a GPT partition on RAID 6 seems redundant.
A partition is not necessary, but it makes sense (if you use fdisk and partition type 8e, you will remember later that this is an LVM partition).
What do you mean by LVM/GPT/RAID6 being redundant? LVM isn't RAID, and a partition table is the normal way. E.g. I have the following setup on one system:
a huge RAID on two nodes, a GPT table with two partitions, two DRBD resources (network RAID 1) on the two partitions, and LVM storage on both DRBD resources - each LVM storage mainly used by one node.
So I can say that RAID, partitions and LVM together make sense (with or without DRBD).
Originally Posted by jaimito
Of course, instead of partitioning the large array I could create three smaller arrays, each presented as a single disk, and each raw disk could then easily have LVM on it...

My apologies for not 'getting' this...

TIA!
As I wrote above - play around with it a little (especially with LVM storage). With LVM storage you can simply expand a VM disk; with RAID volumes or partitions that is much harder or impossible.

Udo
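
Udo's point about expansion: with LVM storage, growing a guest disk is one command on the host (the volume path is an invented example; the command is only echoed here):

```shell
#!/bin/sh
# Grow a VM's logical volume by 10 GiB on the host; afterwards the
# filesystem is enlarged from inside the guest (e.g. with resize2fs).
LV=/dev/vg_archive/vm-102-disk-1   # invented LV path
GROW_CMD="lvextend -L +10G $LV"
echo "$GROW_CMD"
```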
 
