data logical volume not mounted in fresh Proxmox 5.1 installation

zenny

Hi,

I cannot say for sure whether this is a feature or a bug.

I installed Proxmox 5.1 on a KVM machine with 100 GB of space allocated, but 'df -h' does not show pve-data as mounted.

The only relevant messages I see:
<QUOTE>
# dmesg | grep vd
[ 0.921700] vda: vda1 vda2 vda3
# dmesg | grep sd
[ 2.568743] device-mapper: thin: Data device (dm-3) discard unsupported: Disabling discard passdown.
</QUOTE>

Details of the installation follow:
<QUOTE>
# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.9G     0  7.9G   0% /dev
tmpfs                 1.6G  8.8M  1.6G   1% /run
/dev/mapper/pve-root   16G  1.5G   14G  10% /
tmpfs                 7.9G   25M  7.9G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/fuse              30M   16K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0


# pvdisplay
--- Physical volume ---
PV Name /dev/vda3
VG Name pve
PV Size 99.75 GiB / not usable 1.98 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 25535
Free PE 3135
Allocated PE 22400
PV UUID jU5rED-59Il-T33F-H4zD-mG2k-hJG0-nOcQ9Y

# pvscan
PV /dev/vda3 VG pve lvm2 [99.75 GiB / 12.25 GiB free]
Total: 1 [99.75 GiB] / in use: 1 [99.75 GiB] / in no VG: 0 [0 ]

# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID aazU5c-cGhA-7yEQ-L1ds-WHJP-GJGo-mETFSc
LV Write Access read/write
LV Creation host, time proxmox, 2017-12-30 08:38:12 +0100
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID 3EJCEY-qXCR-87DA-XeKZ-HdDJ-Fe4s-GsAD9H
LV Write Access read/write
LV Creation host, time proxmox, 2017-12-30 08:38:12 +0100
LV Status available
# open 1
LV Size 16.00 GiB
Current LE 4096
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID OM45pr-AyGl-2SLo-HWJa-atQi-4jJc-BFEY4H
LV Write Access read/write
LV Creation host, time proxmox, 2017-12-30 08:38:13 +0100
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size 63.38 GiB
Allocated pool data 0.00%
Allocated metadata 0.45%
Current LE 16224
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4


# fdisk -l
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2B158AD4-FC2C-4A4B-95F8-F604475A5384

Device       Start       End   Sectors  Size Type
/dev/vda1     2048      4095      2048    1M BIOS boot
/dev/vda2     4096    528383    524288  256M EFI System
/dev/vda3   528384 209715166 209186783 99.8G Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
</QUOTE>

Cheers,
/z
 
It is a feature - pve-data is a thin pool used to allocate LVM volumes for VM disk images, so there is no need to mount it.
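
You can still inspect the pool from the shell with the standard LVM tools, e.g. 'lvs' (illustrative output, trimmed to the relevant columns; the Data%/Meta% figures match the lvdisplay output above):

<QUOTE>
# lvs pve
LV   VG  Attr       LSize  Data%  Meta%
data pve twi-a-tz-- 63.38g  0.00   0.45
root pve -wi-ao---- 16.00g
swap pve -wi-ao----  8.00g
</QUOTE>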

@dietmar Thanks for the update. I was just reading your reply at https://forum.proxmox.com/threads/d...ed-after-fresh-4-2-install.27220/#post-136966.

1) CLI issue: However, this makes it very inconvenient for command-line users to figure out data usage, and the command line is the standard way to administer any *nix system when the GUI crashes or runs into bugs, fyi.

2) GUI issue: the pve-data (local-lvm) storage seems to be restricted to disk images and containers only, not VZDump backups. I deselected VZDump backups from the local storage and tried to enable them on local-lvm (pve-data), but no relevant option was displayed besides the default two.
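
For reference, the storage definitions in /etc/pve/storage.cfg on this box look like the stock ones, so I assume the content types set there are what limits the options (sketch of the default layout):

<QUOTE>
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
</QUOTE>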

Happy New Year 2018.

Cheers,
/z
 
<QUOTE>
@dietmar
1) CLI issue: However, this makes it very inconvenient for command-line users to figure out data usage, and the command line is the standard way to administer any *nix system when the GUI crashes or runs into bugs, fyi.
</QUOTE>

Sorry, but I think lvm tools are really standard in 2017 ...

Besides, we provide CLI tools (pvesm) to get that information, and a verbose API.
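
For example, something like this (illustrative output; the exact columns vary between versions):

<QUOTE>
# pvesm status
local        dir      active  16447356  1534928  14051604  9.33%
local-lvm    lvmthin  active  66453504        0  66453504  0.00%
</QUOTE>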
 
Hi!

Is there a way to reduce/shrink this kind of pve-data (tdata/tmeta) volume?
I need some more space for the system LV, /dev/pve/root.

Currently, my host has /dev/pve/root = 8 GiB and pve/data = 10 TiB (only 12% of it is used).
I have 10 more working VMs and need some space for backups.
I would like to get, for example, 1 TiB from pve/data back to root.
There is no other free space left except that "big fat" pve/data.
So, how can I repartition, or is there any other way to recover some free space?
Is that possible?
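
For what it's worth, LVM thin pools cannot be shrunk in place (lvreduce refuses to reduce a thin pool's data volume), so the usual route is destructive: back up every guest stored on the pool, remove and recreate the pool at a smaller size, then grow root and its filesystem. A rough sketch, assuming all guests on local-lvm have already been backed up elsewhere and that root is ext4 (the Proxmox default); the sizes are only examples:

<QUOTE>
# lvremove pve/data
# lvcreate -L 9T --thinpool data pve
# lvextend -L +1T /dev/pve/root
# resize2fs /dev/pve/root
</QUOTE>

Afterwards the guests can be restored from the backups onto the recreated pool.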