pve-data not mounted after install

MarkusMcNugen

So I seem to have an interesting problem. I'm not a Linux guru, but I've done my fair share of work on different Linux machines and distributions. I installed Proxmox 4.4 on a server with two 480GB SSDs. I could not for the life of me get ZFS RAID to work using the ISO installer. Whenever I tried to use ZFS, it kicked me back into an EFI shell or couldn't find a boot disk.

So I decided to install onto only one SSD using XFS. The issue is that it's showing only 96GB of available disk space. I ran lvdisplay to see where /dev/pve/data was mounted and found it has no mount point. Instead of data, my Proxmox install is only using the root LVM volume for local storage.

Since the logical volume exists, I thought I would try mounting it via fstab, but that failed with a 1m 30s timeout and dropped me into emergency mode. My fstab entry was "/dev/pve/data /var/lib/vz xfs defaults 0 2".

Relevant commands and output are below. Any help the community could provide would be appreciated. Thank you!

VGDisplay
Code:
root@proxmox2:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               465.51 GiB
  PE Size               4.00 MiB
  Total PE              119170
  Alloc PE / Size       115119 / 449.68 GiB
  Free  PE / Size       4051 / 15.82 GiB
  VG UUID               FGiIrI-CDQU-Mm53-Xpmm-gcN7-d1e9-PUgf9r

LVDisplay
Code:
root@proxmox2:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                QAd8vV-KYc2-g5vX-EomM-DrGI-MqKQ-vla4Hk
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-04-27 15:47:51 -0400
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                DKwgBZ-vTtg-4al0-zjFo-rr8n-WueJ-U5ymtt
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-04-27 15:47:52 -0400
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                uvg1GY-onzH-VPC9-sNkT-vCWw-vPyv-Sp0dyh
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-04-27 15:47:52 -0400
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                345.51 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.43%
  Current LE             88451
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:4

FSTab
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / xfs defaults 0 1
UUID=AE8A-74AB /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
 
df command
Code:
root@proxmox2:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             10M     0   10M   0% /dev
tmpfs           6.3G  9.1M  6.3G   1% /run
/dev/dm-0        96G  6.1G   90G   7% /
tmpfs            16G   54M   16G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda2       253M  296K  252M   1% /boot/efi
/dev/fuse        30M   28K   30M   1% /etc/pve

lsblk command
Code:
root@proxmox2:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 465.8G  0 disk
├─sda1               8:1    0     1M  0 part
├─sda2               8:2    0   256M  0 part /boot/efi
└─sda3               8:3    0 465.5G  0 part
  ├─pve-root       251:0    0    96G  0 lvm  /
  ├─pve-swap       251:1    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta 251:2    0    88M  0 lvm
  │ └─pve-data     251:4    0 345.5G  0 lvm
  └─pve-data_tdata 251:3    0 345.5G  0 lvm
    └─pve-data     251:4    0 345.5G  0 lvm
sdb                  8:16   0 465.8G  0 disk
└─sdb1               8:17   0 465.8G  0 part
loop0                7:0    0   100G  0 loop
 
1. EFI and ZFS do not work together (there is currently no sane way to use an EFI partition with a ZFS RAID1 or RAIDZ-X).

2. In the default installation, the LV "data" in the VG "pve" is not a normal LV but an LVM thin pool (block storage) used by the storage "local-lvm", so it does not show up with df or mount, but you can see the free space with "lvs".
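For reference, on this box "lvs" should report the thin pool roughly like this (the sizes and usage figures mirror the lvdisplay output above; the exact columns vary a bit between LVM versions):

Code:
root@proxmox2:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%
  data pve twi-a-tz-- 345.51g             0.00   0.43
  root pve -wi-ao----  96.00g
  swap pve -wi-ao----   8.00g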
 
Thanks dcsapak, that at least explains why my ZFS RAID wasn't working the first time. I did try disabling EFI and using legacy boot from the BIOS, but that also failed to boot, which is still a question mark.

The 345GB LVM-thin pool was nowhere to be found in Proxmox. Is that normal behavior for a Proxmox 4.4 ISO install using XFS? Or perhaps when I joined the node to our cluster it screwed something up with the storage configuration?

I was able to get the LVM-thin pool added as storage on the node in our cluster. Thanks for your help! I guess I should have read the LVM2 part of the wiki. I hadn't realized the changes between 4.2 and 4.4 with regard to LVM, LVM-thin, and /var/lib/vz.
 
The 345GB LVM-thin pool was nowhere to be found in Proxmox. Is that normal behavior for a Proxmox 4.4 ISO install using XFS? Or perhaps when I joined the node to our cluster it screwed something up with the storage configuration?
When you join a node to a cluster, it gets the storage configuration from the cluster (discarding any storages that were only defined locally), so I guess this is what happened.
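The cluster-wide storage configuration lives in /etc/pve/storage.cfg (part of the cluster filesystem, the /dev/fuse mount visible in the df output above). For comparison, a fresh standalone install typically defines something like the following; this is illustrative, and the exact defaults depend on the installer version and options:

Code:
root@proxmox2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images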
 
Yes, I had an older version of Proxmox which I upgraded to 4.4, then a freshly installed 4.4 server, and then created the cluster. This might be why the data partition is not used. How can I use it?
 
Edit your /etc/pve/storage.cfg file and add this entry:

Code:
lvmthin: pve-data
        thinpool data
        vgname pve
        content images,rootdir

You may also want to restrict the LVM storage to specific nodes by listing their names:

Code:
lvmthin: pve-data
        thinpool data
        vgname pve
        content images,rootdir
        nodes proxmox1
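After saving the file, the storage should show up for that node in the GUI. From the shell you can sanity-check it with the standard pvesm tool (pve-data is the storage name defined above; output is omitted here since it depends on your setup):

Code:
root@proxmox2:~# pvesm status
root@proxmox2:~# pvesm list pve-data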
 
