Error 500: cant upload to storage type 'lvm'

longhair

I want to use an internal hard drive as storage instead of using "local". I followed the instructions from the wiki ( https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Local_Backing ), but as soon as I try to upload a file, I get: Error 500: cant upload to storage type 'lvm'.

Here is my fdisk -l:

Code:
Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders, total 312500000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1   312499999   156249999+  ee  GPT

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   312581807   156290872+  83  Linux

Disk /dev/mapper/pve-root: 40.0 GB, 39996882944 bytes
255 heads, 63 sectors/track, 4862 cylinders, total 78118912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/pve-root doesn't contain a valid partition table

Disk /dev/mapper/pve-swap: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/pve-swap doesn't contain a valid partition table

Disk /dev/mapper/pve-data: 96.9 GB, 96917782528 bytes
255 heads, 63 sectors/track, 11782 cylinders, total 189292544 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/pve-data doesn't contain a valid partition table
 
Thanks.

I don't fully understand the wiki - I'm new to Linux in general - so I have a couple of questions about it.

"Give the 'Portal' IP address or servername and scan for unused targets" - would the servername be the same as the node name, since the drive is in the same box?

It requires something in the Target: field. What do I fill in, since nothing shows in the drop-down box?
 
You don't need to set an IP or servername when choosing "local" LVM: "local" means the storage is defined cluster-wide and each node gets this storage with its own local backing. You need an IP or hostname only when the storage is network-backed (via iSCSI). So:

1) choose a storage name (eg: "lvmlocal")
2) choose "existing volume groups"
3) the "volume group" listbox has a list of the local LVM groups: choose the VG you created on each node for this purpose (they must be named identically and it is better if they are also sized identically, imho)

Access the pve console (shell) and look at the output of

#vgs

It should list all your LVM VGs: find the right one and use it in step 3).
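
For example, on a default PVE install the output looks something like this (hypothetical sizes, your names and numbers will differ):

Code:
# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1   3   0 wz--n- 148.00g 16.00g

Here "pve" is the system VG; the VG you create for your extra disk will show up as another row.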

Marco
 
I thought I needed to create iSCSI first?

If I follow your last post and create LVM, then I have the same problem as in my first post.
 
iSCSI is for network-based storage: you connect the storage with iSCSI, then use it with LVM on top.
"Local LVM" can be used without iSCSI; you just need a local VG.

Please post the output of

#pvs
#vgs
#lvs

from the CLI of the pve node where you wish to use that internal hard drive.
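
On a stock install, before adding your disk, you would typically see only the "pve" PV and its LVs, roughly like this (hypothetical device and sizes):

Code:
# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  pve  lvm2 a--  148.00g 16.00g
# lvs
  LV   VG   Attr   LSize
  data pve  -wi-ao 90.25g
  root pve  -wi-ao 37.25g
  swap pve  -wi-ao  5.00g

If your /dev/sdb1 PV and its VG do not appear in these lists, the GUI has nothing local to offer you.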

Marco
 
If I follow your last post and create LVM, then I have the same problem as in my first post.

Just to be clear: you can't CREATE local LVM from the GUI, you can only USE an EXISTING local LVM from the GUI. You have to set up LVM from the CLI and create a VG. Then you use that VG from the web GUI as "local LVM".

If you want to use LVM over iSCSI (storage on another network host, NAS, etc.) then you have to
1) create an iSCSI pve storage from the web GUI
2) create an LVM-over-iSCSI pve storage using that pve iSCSI storage, from the web GUI

If you want to use LVM over local storage (same pve host, no other network NAS/host) then you have to
1) from the CLI, use the LVM commands to create a VG (since pve already uses LVM on its own, you will add your local disk as another PV and then create a dedicated VG to use as below; see the sketch after this list)
2) create an LVM pve LOCAL storage from the web GUI, just specifying the LOCAL VG to use for it.
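
A minimal CLI sketch of step 1, assuming your spare disk's LVM partition is /dev/sdb1 and "vmdata" is a VG name chosen just for this example:

Code:
# pvcreate /dev/sdb1
# vgcreate vmdata /dev/sdb1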

At least, AFAIK. I just use iSCSI for my LVM pve storage. But that is what the wiki seems to suggest :)

You could also, instead of using LVM at all, create a local storage: mount your disk device on a folder, e.g. /mnt/localstorage, then in the pve web GUI use that folder as a local (Directory) storage. Since it is not LVM you can use it for any purpose, not just VM RAW disks.
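
A rough sketch of that alternative, assuming /dev/sdb1 already carries a file system:

Code:
# mkdir /mnt/localstorage
# mount /dev/sdb1 /mnt/localstorage

Note this mount alone does not survive a reboot; the fstab part is covered later in the thread.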

Marco
 
These are the steps from the wiki that I followed. For the first two steps I logged in through PuTTY as root.

First create the physical volume (pv):

Code:
proxmox-ve:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

Second, create a volume group (vg):

Code:
proxmox-ve:~# vgcreate usb-stick /dev/sdb1
  Volume group "usb-stick" successfully created
proxmox-ve:~#

And finally: Add the LVM Group to the storage list via the web interface:

Code:
"Storage name: usb", "Base storage: Existing volume groups", "Volume Group Name: usb-stick"
 
longhair, if you want a local LVM storage, then you can only put virtual disks for your VMs/CTs on it, not backups, ISOs, etc. (which is why your upload fails). If you are sure that this is what you want, follow these steps on the CLI:

1- Create a partition on the new disk, for example if it is sdb:
shell> cgdisk /dev/sdb
#and create a partition of type Linux LVM

2- Create a Physical Volume on the new partition:
shell> pvcreate /dev/sdb1

3- Create a new Volume Group (VG) with any name you want, but not "swap", "root" or "pve":
shell> vgcreate my-new-VG /dev/sdb1

4- Confirm that the new VG was created (you should see the name of your new VG):
shell> vgscan
shell> vgs

5- In the PVE GUI: Datacenter, Storage, Add, LVM
#then complete the fields...
ID: a name of your choice (dog, cat, my-new-VG, etc.)
Volume Group: (here select the VG that was created)
Nodes: (here select only the PVE host that has the HDD)
Enabled: (yes)
Shared: (no)

Finally, if you are sure, click the [Add] button.

6- From now on, you will be able to create VMs and put their virtual disks on this storage.
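
To double-check the result from the CLI, pvesm (the PVE storage manager) should now list the new storage among the others:

Code:
shell> pvesm status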
 
I want to be able to store everything, not only VMs, on the 2nd hard drive.
Then don't use LVM. Use a file system (note: LVM as a raw block device is much quicker than a file system):

With this example you will not be able to do live backups of CTs, only of VMs. If you want to do live backups of both CTs and VMs, you must put a file system on top of LVM and keep a good amount of free space in the VG unused (but that setup is more complicated to do).

To create a file system and use it:
1- Create a Linux partition on the new disk
2- Format this new partition with ext3 (better performance in PVE compared with other file systems; see the sketch after this list)
3- With extreme care, edit /etc/fstab so that on each boot your host can recognize the new drive and mount it (search the Internet or the Linux manuals if you need help)
Just as an example, I did this on a host:
shell> mkdir /mnt/extra-disk
shell> nano /etc/fstab
#then add this line at the end of the file; each UUID is unique in the world, so do not copy this line as it is here:
UUID=9a9f28fc-d280-49cb-b86b-7476c8ce9c8e /mnt/extra-disk ext3 defaults 0 2

4- Reboot
5- From the CLI, confirm that the new volume was mounted:
shell> df -h
6- In the PVE GUI add a Directory:
ID: assign any name you want (dog, cat, etc.)
Directory: don't forget that the new file system lives on the mount point chosen in the fstab file; the directory/path in this field must match it (/mnt/extra-disk in the example)
Content: (you may want to select all items)
Nodes: (only the node that has this new disk)
Enabled: (yes)
Shared: (no)
Max Backups: (choose a number; it will be the maximum number of backups this storage can keep per VM/CT)
7- From now on you can create new VMs/CTs whose virtual disks can live on this new storage called dog, cat, etc., and also ISOs, templates, etc.
8- Enjoy your new configuration in Proxmox
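
The sketch promised above: formatting the partition and finding its UUID for fstab, assuming the new partition is /dev/sdb1 (the UUID printed will be your own, not the one from the example):

Code:
shell> mkfs.ext3 /dev/sdb1
shell> blkid /dev/sdb1
/dev/sdb1: UUID="9a9f28fc-d280-49cb-b86b-7476c8ce9c8e" TYPE="ext3"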
 
Thanks for taking the time to write instructions that would be clear to someone who knows Linux.

Unfortunately, I realize that I do not know the first thing about Linux needed to do something as mind-numbing as creating a new partition. Even after searching Google and reading manuals, the terminology used is meant for someone who already knows how to do everything, not a complete beginner like myself.
 
If you're really new to Linux, you had better start with basic things, and LVM is perhaps not the simplest one to understand :) You could (e.g.) simply mount the disk as a folder, and use that folder as a simple "local" storage.

But in all cases, you need to create a partition on the disk to use it: if you go LVM, a partition of the LVM type; if you go "local", you must choose a file system type (cesarpk suggested ext3).

* LVM is somewhat simpler but has storage type limitations, as you already know :)
basically you have to
1) create an LVM partition on the disk
2) create a VG
3) use that VG for LVM storage in pve gui

there are many examples around the web, even for beginners: you can find one that suits your level.
eg: http://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/

* Local is slightly more difficult (??) but allows you any storage type.
basically you have to
1) create a partition on the disk and format it, e.g. as ext3
2) mount the partition under a local folder, e.g.: /mnt/extra-disk
3) use that folder as LOCAL storage in pve gui
the "difficult part is about mounts and UUIDs if you don't know waht they are.
in linux you can mount any device as a "folder" (more or less as usb drives in windows, etc)
in pve from command line syntax is simple, eg (as root):
#mount /dev/devicename /folder/name - o <options>

but
1) this will not survive a reboot! You have to add a line in fstab to make it survive a reboot
2) if you wish to make absolutely sure that at reboot fstab mounts the right drive, a simple device name like /dev/sdb is not sufficient: it depends on how pve (kernel & co.) names the disks, and a new kernel or something else could use another scan order at boot, so /dev/sdb may not be what you think. That is what the UUID is for (a universally unique ID, a large number like 9a9f28fc-d280-49cb-b86b-7476c8ce9c8e): if you mount by UUID nothing can be mounted wrong, even if the scan order is random! :)
Try this on the pve console:
#ls -l /dev/disk/by-uuid/

and you will find which UUID /dev/sdb has right now. Note the right UUID, then use the syntax cesarpk provided, i.e.:
UUID=<your uuid> /mnt/extra-disk ext3 defaults 0 2
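
For reference, the by-uuid listing is just a set of symlinks from UUID to device name, so the output looks roughly like this (hypothetical entry; match the arrow target to your disk):

Code:
# ls -l /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 10 Jun 27 12:00 9a9f28fc-d280-49cb-b86b-7476c8ce9c8e -> ../../sdb1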

As always, if unsure, experiment on a non-critical machine first.

Marco
 
