/dev/pve/data/ not mounted after fresh 4.2 install

the4ndy

New Member
May 3, 2016
So I have been using Proxmox at work and at home for several years now, and admittedly I do not have the strongest grasp on many of Proxmox's features, but I can usually get by alright. These past few days... not so much.

I have done a fresh install of Proxmox 4.2 (4.2-2/725d76f0) on my server (which was previously running 4.1 with no issues), and it seems the /dev/pve/data volume that, to my knowledge, normally mounts to /var/lib/vz is not in /etc/fstab and will not mount (it appears to not even have a filesystem, based on the output of fsck /dev/pve/data). Admittedly this is where I am not too experienced.

I have tried the following things:
- Most of the advice here: https://forum.proxmox.com/threads/wont-boot.5908/
- Most of the advice here: https://forum.proxmox.com/threads/not-all-disk-space-available-after-installation.10510/
- I have used this guide before to great success when adding new drives to a node, and I tried to use some commands here as guidance (mainly to help add a /dev/pve/data line to fstab) https://pve.proxmox.com/wiki/Extending_Local_Container_Storage
- I read through this one but it did not pertain as I saw it much to my situation: https://forum.proxmox.com/threads/added-5tb-drive-only-showing-as-500gb.20347/
- I hit this same issue after at least 3 reinstalls using a fresh ISO from the website. On one install, after rebooting with a line in /etc/fstab mounting /dev/pve/data, I would get thrown into maintenance mode; this thread describes a similar issue, but ultimately it also did not lead me to the solution: https://forum.proxmox.com/threads/wont-boot.5908/

Anyway, the end result of these issues is that only 90 ish GB of my 550 ish GB are available for use by the node.


Any help that I can get would be greatly appreciated. I do not have a subscription, so whatever help is given, I will be forever grateful. I am considering reinstalling Proxmox 4.0 or 4.1 and then just doing the no-subscription upgrade, as I did with my other node (which is not having any issues currently) and as I did with the node in question prior to having to reinstall due to hardware maintenance. Thanks again in advance for the help; I am so very thankful for Proxmox and the community!
 
We now use lvm-thin on /dev/pve/data. That storage is available as 'local-lvm', and you can use it to store VM/CT data. But it is a block storage, so it is not mounted at all.
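For anyone checking their own setup, the thin pool can be inspected from the shell. A sketch, assuming the default 'pve' volume group from the installer:

```shell
# List the logical volumes in the 'pve' VG; on a stock 4.2 install,
# 'data' has attributes starting with 't' (thin pool) and no mount point.
lvs pve

# Proxmox's own view of the configured storages and their free space:
pvesm status
```

If 'local-lvm' shows up in the pvesm output with the expected size, the space is there, just as block storage rather than a mounted filesystem.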
 
hmmmm.....so then in the gui when it says I have only 90 ish GB of data....is that just not accurate, or... sorry for not understanding, how do i get the rest of my data accessible? as of now the storage "local" points to /var/lib/vz
 
hmmmm.....so then in the gui when it says I have only 90 ish GB of data....is that just not accurate, or...

Depends where you look in the GUI. If you look at the local-lvm storage, there should be more space.

sorry for not understanding, how do i get the rest of my data accessible? as of now the storage "local" points to /var/lib/vz

What do you mean by accessible? 'local' is storing iso images, lxc templates and backups by default.
 
By accessible I mean that in total I have about 650GB of space, and 90GB ish of it is showing as available to store stuff on in 'local' (which points to /var/lib/vz). However, the remaining 550GB ish is nowhere to be found in the GUI, and I am not sure how to make it available to use as storage for the actual LXCs and VMs. Normally in past versions of Proxmox I would make folders in /media/ called backups, templates, isos, etc. and store those items on the 90GB ish that is available on root, and since pve-data is mapped to /var/lib/vz and is usually the remaining chunk of my space, I would use that to store the VMs on.

I guess I am not doing things properly, but at this point I am missing about 550GB ish of space. Currently the GUI says that 'local' has 90 ish GB of space, and I cannot restore my VMs to the node using the 'local' storage because there is not enough space on it (exit code 133), indicating that the 90GB available space displayed by the GUI is somewhat accurate.

I really do apologize for my inabilities and I cannot thank you guys/girls enough for the help you provide. Thanks so much!
 
There is (well, now was) no local-lvm storage option (this may be because I added the node to a cluster which did not have that storage location, but I am not sure).

I apologize for the delay, I went off the grid camping for a few days but I am back now and I fixed my problem, though I am still not clear on what went wrong. My solution was simply to install the node via a 4.0 ISO and then upgrade with the no-subscription repo to 4.2 and everything is back to normal :D

Thanks again for everyone's help.
 
Yes, that explains the behaviour.

I ran into the same problem here... fresh installation of 4.2, and added to an existing cluster.
There is no "local-lvm" in the GUI, although both servers are 4.2

New server has a lightning-fast pcie-SSD with 512GB, and I can't use it :(

How is this to be solved?
 
You can simply add the storage again, either in the GUI or manually in /etc/pve/storage.cfg.
 
I ran into the same problem here... fresh installation of 4.2, and added to an existing cluster.
There is no "local-lvm" in the GUI, although both servers are 4.2

New server has a lightning-fast pcie-SSD with 512GB, and I can't use it :(

How is this to be solved?


You're not going to like my reply, so read through what everyone else explained first. To my understanding, they changed the way storage works, and we should be looking for something different in 4.2... idk.

Anyway, the solution I had was to install the node as 4.0, then manually upgrade to 4.2. Not the easiest solution, but it works.
 
I see.
My "master" node was a machine I once installed with 4.0 and successively patched up to 4.2,
so it never had "thin LVM", although it was a 4.2.
Then I installed the second node from a fresh 4.2 ISO - this one had "thin LVM". But I immediately added it to the cluster, and so the "thin LVM" information vanished? Is this a bug in the cluster-software?

Anyway..
the steps are to remove the new node from the cluster, reinstall (different name?), restore "old /var/lib/vz layout", and then re-add to the cluster?
 
Yes, that'll work, yet I think there is a simpler solution than reinstalling. Almost everything in Linux can be changed without completely destroying everything.

Could you please cat your storage.cfg from each node? Then we can see what is going on there.
 
I see.
My "master" node was a machine I once installed with 4.0 and successively patched up to 4.2,
so it never had "thin LVM", although it was a 4.2.
Then I installed the second node from a fresh 4.2 ISO - this one had "thin LVM". But I immediately added it to the cluster, and so the "thin LVM" information vanished? Is this a bug in the cluster-software?

That is not a bug, but a design limitation. The storage definitions in /etc/pve/storage.cfg are shared cluster-wide, and if you join an existing cluster your local configuration is overwritten with the one from the cluster. You basically have two options:
  • re-add the LVM-thin pool on the newly joined cluster to the storage definitions, but limit access to that node (this can simply be done in the GUI)
  • delete the LVM-thin pool and recreate the old-style setup with a "fat" LV on /var/lib/vz
No need to reinstall the node in either case!
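For the second option, a minimal sketch of the commands involved, assuming the default 'pve' volume group from the installer. Warning: lvremove destroys the thin pool and everything stored in it, so only do this on a node with no guests on local-lvm (and remove the corresponding lvmthin entry from the storage definitions afterwards):

```shell
# DESTROYS the thin pool and anything stored on it:
lvremove pve/data

# Recreate 'data' as a regular ("fat") LV in the freed space:
lvcreate -n data -l 100%FREE pve

# Put a filesystem on it and mount it at /var/lib/vz, like the pre-4.2 layout:
mkfs.ext4 /dev/pve/data
echo '/dev/pve/data /var/lib/vz ext4 defaults 0 1' >> /etc/fstab
mount /var/lib/vz
</imports>
```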
 
OK, so as I no longer have a template for this:
I add it in the GUI like this:
Datacenter/Storage/Add/LVM-Thin
What are the required values for "ID" and "Thin Pool"?
 
storage.cfg on both nodes is:

dir: local
        path /var/lib/vz
        maxfiles 0
        content iso,rootdir,images,vztmpl

and lvdisplay on the new node gives:
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID T8WIXN-yaef-uMKW-fYwy-RArj-AORe-lbwsMZ
LV Write Access read/write
LV Creation host, time proxmox, 2016-06-30 06:50:59 +0200
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:1

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID TRQ8s1-QXcw-k5cg-jPTE-12e1-3DLC-Ua46u0
LV Write Access read/write
LV Creation host, time proxmox, 2016-06-30 06:50:59 +0200
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:0

--- Logical volume ---
LV Name data
VG Name pve
LV UUID DGmlV4-AXfI-O3gk-yiDq-EQwL-FDOh-V8Chdg
LV Write Access read/write
LV Creation host, time proxmox, 2016-06-30 06:51:00 +0200
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size 356.82 GiB
Allocated pool data 0.00%
Allocated metadata 0.42%
Current LE 91345
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:4
 
Finally found the solution.
You cannot do it in the GUI, because the LVM data pool won't show up there.
So you have to edit the /etc/pve/storage.cfg file by hand and add these lines:

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images
        # replace pvei5 with your node's name:
        nodes pvei5


Thank you to everyone for your support.
 
You should be able to add it to the GUI without problems:
Datacenter -> Storage -> Add -> LVM-Thin
then pick an ID (e.g., local-lvm) and select first the VG and then the thin pool (both provide a dropdown of the existing VGs and thin pools).
 
Hello!
How can I mount /dev/pve/data to copy files to it from the console/mc in Proxmox >= 4.2?
I tried "mount /dev/mapper/pve-data /mnt/data -o rw,user"
and got:
"mount: wrong fs type, bad option, bad superblock on /dev/mapper/pve-data,
missing codepage or helper program, or other error

In some cases useful info is found in syslog - try
dmesg | tail or so."
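As explained earlier in the thread, /dev/pve/data on a fresh 4.2 install is an LVM thin pool (block storage), not a filesystem, which is why mount fails with "wrong fs type". One way to get a mountable filesystem backed by the pool is to create a thin volume inside it; a sketch, where the LV name 'files' and the 100G size are made-up examples:

```shell
# Carve a 100G thin volume out of the pve/data thin pool:
lvcreate -V 100G -T pve/data -n files

# Put a filesystem on the new volume and mount that instead of the pool:
mkfs.ext4 /dev/pve/files
mkdir -p /mnt/data
mount /dev/pve/files /mnt/data
```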
 
