4.2: "mount /dev/mapper/data" not possible any more?

Hi.
Today I installed a fresh Proxmox 4.2 server and wanted to copy an existing VM backup to this machine.
It has 1 TB of capacity ... so there should be more than enough free space.

BUT: I cannot mount the LVM data partition any more. On our old server there was:
Code:
/dev/mapper/pve--raid-data on /mnt/raid10-pve4 type ext4 (rw,relatime,data=ordered)
But now on 4.2 I've got:
Code:
mount /dev/mapper/pve-data /mnt/pve-data/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/pve-data,
  missing codepage or helper program, or other error
  In some cases useful info is found in syslog - try
  dmesg | tail or so.
When I use "lsblk" there are also big differences between our servers. On the new server I get this:
Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   127M  0 part
└─sda3                         8:3    0 931.4G  0 part
  ├─pve-root                 251:0    0   100G  0 lvm  /
  ├─pve-swap                 251:1    0     7G  0 lvm  [SWAP]
  ├─pve-data_tmeta           251:2    0   104M  0 lvm
  │ └─pve-data-tpool         251:4    0 808.4G  0 lvm
  │   ├─pve-data             251:5    0 808.4G  0 lvm
  │   ├─pve-vm--501--disk--1 251:6    0    32G  0 lvm
  │   ├─pve-vm--501--disk--2 251:7    0     1G  0 lvm
  │   ├─pve-vm--502--disk--1 251:8    0     5G  0 lvm
  │   ├─pve-vm--503--disk--1 251:9    0    20G  0 lvm
  │   ├─pve-vm--503--disk--2 251:10   0    50G  0 lvm
  │   └─pve-vm--506--disk--1 251:12   0    22G  0 lvm
  └─pve-data_tdata           251:3    0 808.4G  0 lvm
    └─pve-data-tpool         251:4    0 808.4G  0 lvm
      ├─pve-data             251:5    0 808.4G  0 lvm
      ├─pve-vm--501--disk--1 251:6    0    32G  0 lvm
      ├─pve-vm--501--disk--2 251:7    0     1G  0 lvm
      ├─pve-vm--502--disk--1 251:8    0     5G  0 lvm
      ├─pve-vm--503--disk--1 251:9    0    20G  0 lvm
      ├─pve-vm--503--disk--2 251:10   0    50G  0 lvm
      └─pve-vm--506--disk--1 251:12   0    22G  0 lvm

But neither "mount" nor "fdisk -l" will show me the BIG LVM volume (pve-data, 808.4G) ...
So what do I have to do (or what do I have to mount) to get access to this BIG free space? I think it has something to do with the new, pre-installed "LVM thin support" -- but how do I make use of the free space there?

(I simply wanted to copy the backup via scp to that location...)
Thanks for a hint.
 
This big free space is used for an LVM thin pool, which does not contain a filesystem and cannot be mounted. Instead, when you create a VM there, Proxmox allocates a new logical volume in the thin pool.
for details see https://pve.proxmox.com/wiki/LVM2
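you can see this with "lvs": the thin pool shows up with a "t" in the attribute column and the thin volumes with a "V". For example (using the pve volume group from your lsblk output):
Code:
# list all logical volumes in the pve volume group;
# the "data" pool and the vm-XXX-disk-Y volumes appear here
lvs pve

# show just the thin pool, including its usage (Data% column)
lvs pve/data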

depending on how big the backup file is, you can:
  • simply copy it onto the root filesystem,
  • temporarily use a USB drive or network share, or
  • create a logical volume manually, which you can then mount (see my previous link for a guide, and the sketch below)
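a sketch of that last option (the name "backup" and the 100G size are just examples, adjust them to your needs):
Code:
# create a 100G thin volume named "backup" in the pve/data thin pool
# (space is only allocated as data is actually written)
lvcreate -V 100G -T pve/data -n backup

# put a filesystem on it and mount it
mkfs.ext4 /dev/pve/backup
mkdir -p /mnt/backup
mount /dev/pve/backup /mnt/backup
after that you can simply scp the backup file to /mnt/backup.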
 
Hi dcsapak,

What are the benefits of doing storage this way now? I installed a new node today and noticed the local-lvm storage and the lack of /dev/pve/data being mounted at /var/lib/vz.

In my case, as it was a fresh server, I mounted it traditionally via fstab and ext4 to bring it in line with the rest of my cluster, but I am curious why you decided to make LVM subvols the default option now.

Thanks
 
What are the benefits of doing storage this way now?
for example:
  • snapshots (also for containers, without needing ZFS, Ceph, etc.)
  • no additional filesystem layer between the disk and the containers/VMs
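with LVM thin, a guest snapshot is just a thin snapshot of its logical volume. A rough sketch, using VMID 501 from the lsblk output above (the snapshot names here are made up):
Code:
# snapshot a VM through the Proxmox tooling ...
qm snapshot 501 before_upgrade

# ... which on LVM-thin storage comes down to a thin snapshot like
lvcreate -s pve/vm-501-disk-1 -n vm-501-snap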
 
In my case, as it was a fresh server, I mounted it traditionally via fstab and ext4 to bring it in line with the rest of my cluster. [...]
You could manually create a volume in the pool and mount it at /var/lib/vz.
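For example (the 200G size is just an example, pick whatever fits your pool):
Code:
# create a thin volume for /var/lib/vz in the pve/data pool
lvcreate -V 200G -T pve/data -n vz
mkfs.ext4 /dev/pve/vz
mount /dev/pve/vz /var/lib/vz

# make it persistent across reboots
echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab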
 
If I have an older install (one that started on Proxmox 3 but has been upgraded to Proxmox 4), what would the process be to convert the existing pve-data volume in line with your now-recommended practices?

(Obviously I will back up the existing VMs first.)

I assume something along the lines of:
  • Back up VMs
  • Unmount /var/lib/vz
  • Remove from /etc/fstab
  • Remove pve-data volume
  • Create a new volume in line with how Proxmox now provisions it?
  • Add a new LVM-Thin storage entry using the new pve-data volume?
  • Re-import VMs using the new storage volume
Am I missing anything?
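For reference, the core of such a conversion might look like this (only a sketch, assuming the default pve volume group; the 800G size and the storage name local-lvm are examples, and lvremove destroys everything on pve-data, so back up first):
Code:
# after backing up, unmounting /var/lib/vz and removing its fstab entry:
lvremove pve/data

# recreate pve-data as a thin pool (size is an example)
lvcreate -L 800G -T pve/data

# register it with Proxmox as LVM-thin storage
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir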
 
