" mode failure - unable to detect lvm volume group" after resinstalation and update

juanmaria

Member
Apr 26, 2012
Hi,

I'm setting up a Proxmox server at OVH, and one of the tests I'm doing before putting it into service is a disaster simulation.

So I backed up all my VMs and configuration files, ordered a reinstallation, and restored everything. After the restore I also updated the system, which went from Proxmox 2.2-24 to Proxmox 2.2-31.

Almost everything is going fine, but I can no longer make snapshot backups; I get the "unable to detect lvm volume group" error.

My backups are from Proxmox 2.2-24.

During the reinstallation I defined an LV 300 GB smaller than the total disk size to leave room for snapshots. I had to reduce the LV size like this in my previous installation to be able to make snapshots, and after that reduction they worked fine.
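For what it's worth, the space left free for snapshots can be checked directly in LVM; this is a rough sketch of the check (field list trimmed for brevity):

Code:
# show total and free space in the pve volume group;
# vzdump snapshot mode needs free extents here for its temporary snapshot LV
vgs pve -o vg_name,vg_size,vg_free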

Some info:

Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup,rootdir
        maxfiles 10

dir: backup
        path /backup
        content backup
        maxfiles 6


# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb5
  VG Name               pve
  PV Size               1.80 TiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              471637
  Free PE               51894
  Allocated PE          419743
  PV UUID               PFx0jw-eYXK-04vc-Ualp-sD5a-wB3v-gBOCB2
   



# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID                V32rFE-qtvx-0qB4-eh27-5YWI-Vtsu-nOwsX9
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                1.60 TiB
  Current LE             419743
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0




# cat /proc/mounts
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
none /dev devtmpfs rw,relatime,size=12328684k,nr_inodes=3082171,mode=755 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
/dev/sdb1 / ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
/dev/sda1 /backup ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
/var/lib/vz/private/102 /var/lib/vz/root/102 simfs rw,relatime 0 0
proc /var/lib/vz/root/102/proc proc rw,relatime 0 0
sysfs /var/lib/vz/root/102/sys sysfs rw,relatime 0 0
tmpfs /var/lib/vz/root/102/lib/init/rw tmpfs rw,nosuid,relatime,size=131072k,nr_inodes=32768,mode=755 0 0
tmpfs /var/lib/vz/root/102/dev/shm tmpfs rw,nosuid,nodev,relatime,size=131072k,nr_inodes=32768 0 0
devpts /var/lib/vz/root/102/dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0

Thanks in advance.
 
Re: " mode failure - unable to detect lvm volume group" after resinstalation and upda

Hi,
your LV pve-data isn't mounted (the line "/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0" is missing from your mounts), so your containers are running on the root filesystem!
Is there an entry for /var/lib/vz in /etc/fstab?
What is the output of:
Code:
vgs
lvs
cat /etc/fstab
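For comparison, on a default Proxmox install the entry usually looks something like this (device path and mount options may differ on your system):
Code:
# typical /etc/fstab line mounting the data LV (illustrative)
/dev/pve/data /var/lib/vz ext3 defaults 0 1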
Udo
 
Re: " mode failure - unable to detect lvm volume group" after resinstalation and upda

Thanks Udo, you're right!

But I should have noticed it myself. What I cannot understand is the following:

Code:
# df -vh
Filesystem            Size  Used Avail Use% Mounted on
none                   12G  236K   12G   1% /dev
/dev/sdb1              20G  4.9G   14G  27% /
tmpfs                  12G     0   12G   0% /lib/init/rw
tmpfs                  12G   19M   12G   1% /dev/shm
/dev/sda1             1.8T   16G  1.7T   1% /backup
/dev/fuse              30M   16K   30M   1% /etc/pve

So it's clear that the LV has not been mounted.

But... how come I didn't get an error when restoring my 150 GB VMs onto a 20 GB root filesystem?
It really puzzles me :confused:

And there is this:
Code:
# ls -ahl
total 2.5G
drwxrwxrwx 2 root root 4.0K Nov 22 16:26 .
drwxr-xr-x 4 root root 4.0K Nov 22 16:26 ..
-rw-r--r-- 1 root root 150G Nov 22 18:51 vm-100-disk-1.raw

# du -sh /var/lib/vz/images/100/
2.5G    /var/lib/vz/images/100/

I've got a 150 GB file in this directory, but du tells me that only 2.5 GB are occupied.

Is there something about this kind of file that I don't know? Some sort of sparse file?

Anyway, I had to modify my fstab, replacing /dev/pve/data with its UUID, to get it mounted properly at startup, and now everything seems to work fine.
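In case it helps someone else, the change looks roughly like this (the UUID below is only a placeholder; use the value blkid reports for your own LV):

Code:
# find the filesystem UUID of the data LV
blkid /dev/pve/data
# then reference it in /etc/fstab instead of the device path, e.g.:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /var/lib/vz ext3 defaults 0 1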

Thank you.
 
Re: " mode failure - unable to detect lvm volume group" after resinstalation and upda

Try: ls -lsh vm-100-disk-1.raw

The first number on the output line is the space actually allocated to the file on disk.
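If you want to see the behavior in isolation, a quick experiment along these lines demonstrates it (the file name is arbitrary; safe to run in any scratch directory):

Code:
# create a file with a large apparent size but no allocated blocks
truncate -s 1G sparse-test.img
ls -lsh sparse-test.img   # allocated size (first column) is ~0, apparent size is 1.0G
du -sh sparse-test.img    # du also reports the ~0 actually allocated
rm sparse-test.img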
 
Re: " mode failure - unable to detect lvm volume group" after resinstalation and upda

Hi,

Code:
2.6G -rw-r--r-- 1 root root 150G Nov 23 08:23 vm-100-disk-1.raw

So Proxmox is using sparse files, isn't it?

I didn't know about sparse files in Linux and had never used them, so now I've learned something new.

Thank you.
 
Re: " mode failure - unable to detect lvm volume group" after resinstalation and upda

I don't think you can call them sparse files as such. The main idea is that the hypervisor reserves the requested amount of space in the file system, making that space unavailable to others. It is still thick provisioning, though.
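For what it's worth, the difference between the two styles is easy to illustrate (assuming a filesystem with fallocate support, such as ext4):

Code:
# sparse / thin: large apparent size, nothing allocated up front
truncate -s 10G thin.img
# preallocated / thick: the full 10 GB is reserved on disk immediately
fallocate -l 10G thick.img
ls -lsh thin.img thick.img   # compare the allocated sizes in the first column
rm thin.img thick.img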