INFO: mode failure - unable to detect lvm volume group

xocean (Guest)
Hello, good afternoon.

This is my first post; greetings to the community.

Could someone please tell me where the failure is, or which direction I should take?

***

INFO: starting new backup job: vzdump 101 --remove 0 --mode snapshot --compress gzip --storage Copias_75 --node nsxxxxxx
INFO: Starting Backup of VM 101 (openvz)
INFO: CTID 101 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
...
INFO: Backup job finished successfully
TASK OK
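For context, vzdump chooses snapshot mode only when the filesystem holding the container's private area sits on an LVM logical volume; otherwise it logs "mode failure - unable to detect lvm volume group" and falls back to suspend mode. A minimal sketch of that decision, using a hypothetical `is_lvm_dev` helper and a device-name heuristic (the real vzdump queries LVM directly, not device names):

```shell
# Hypothetical helper sketching vzdump's fallback decision: snapshot mode
# needs the container's private area to live on an LVM logical volume.
# (Name-based heuristic only; vzdump itself asks LVM directly.)
is_lvm_dev() {
    case "$1" in
        /dev/mapper/*|/dev/dm-*) return 0 ;;  # device-mapper node, likely an LV
        *) return 1 ;;                        # plain partition: no LVM snapshot
    esac
}

for dev in /dev/mapper/vg1-lv1 /dev/sdb1; do
    if is_lvm_dev "$dev"; then
        echo "$dev: snapshot mode possible"
    else
        echo "$dev: mode failure - falling back to suspend"
    fi
done
# prints:
#   /dev/mapper/vg1-lv1: snapshot mode possible
#   /dev/sdb1: mode failure - falling back to suspend
```

In practice the device to test is whatever `df -P <container private dir>` reports in its first column.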


Thanks for everything.
 
pls post the output of 'pvdisplay' and 'cat /etc/pve/storage.cfg'
 
Hi Tom,

pls post the output of 'pvdisplay' and 'cat /etc/pve/storage.cfg'



cat /etc/pve/storage.cfg

dir: Copias_75
    path /backups
    shared
    content images,iso,vztmpl,rootdir,backup
    maxfiles 50

dir: local
    path /var/lib/vz
    content images,iso,vztmpl,rootdir
    maxfiles 0

dir: Copias_300
    path /backups
    shared
    content images,iso,vztmpl,rootdir,backup
    maxfiles 300
    nodes ksxxxxxx

pvdisplay

--- Physical volume ---
PV Name /dev/sda3
VG Name vg1
PV Size 97.14 GiB / not usable 2.97 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 24867
Free PE 4387
Allocated PE 20480
PV UUID LaMsxi-soKD-dDlD-iQRj-XZ9a-maHe-Q89VQi

vgdisplay

--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 97.14 GiB
PE Size 4.00 MiB
Total PE 24867
Alloc PE / Size 20480 / 80.00 GiB
Free PE / Size 4387 / 17.14 GiB
VG UUID OnzZRe-Wa7Y-2Fd6-N9Xd-IFPw-IjYE-Fy4T2a

lvdisplay

--- Logical volume ---
LV Path /dev/vg1/lv1
LV Name lv1
VG Name vg1
LV UUID voEh1d-gZNz-UKFG-BeEu-ifll-e26i-uRFBxc
LV Write Access read/write
LV Creation host, time nsxxxxxx, 2013-01-31 19:44:46 +0100
LV Status available
# open 1
LV Size 80.00 GiB
Current LE 20480
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

df -vh

Filesystem Size Used Avail Use% Mounted on
none 12G 228K 12G 1% /dev
/dev/sda1 13G 1.5G 11G 13% /
tmpfs 12G 0 12G 0% /lib/init/rw
tmpfs 12G 23M 12G 1% /dev/shm
/dev/sdb1 111G 8.9G 96G 9% /backups
/dev/fuse 30M 20K 30M 1% /etc/pve
tmpfs 12G 8.0K 12G 1% /tmp
tmpfs 12G 8.0K 12G 1% /tmp
/dev/mapper/vg1-lv1 79G 184M 75G 1% /var/lib/vz


Thanks for everything.
 
I apologize for forwarding the message so many times; I have only just read it.

Leaving only 5 seconds to read the entire message is not enough time.

I apologize again.

Thanks for everything!
 
Hi Dietmar,


vi /etc/vzdump.conf

#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
size: 7000
maxfiles: 15
#script: FILENAME
#exclude-path: PATHLIST
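A side note on the `size` line above: it is the size in MB of the snapshot LV that vzdump creates, and the volume group must have at least that much free space. A quick arithmetic check, with the values copied from the vgdisplay output posted earlier in this thread:

```shell
# Values copied from the vgdisplay output earlier in the thread.
pe_size_mb=4        # PE Size: 4.00 MiB
free_pe=4387        # Free PE
snap_size_mb=7000   # 'size' from /etc/vzdump.conf

free_mb=$((free_pe * pe_size_mb))
echo "free: ${free_mb} MiB, snapshot needs: ${snap_size_mb} MiB"
if [ "$free_mb" -ge "$snap_size_mb" ]; then
    echo "snapshot LV fits"
else
    echo "not enough free extents"
fi
# prints "free: 17548 MiB, snapshot needs: 7000 MiB"
# then   "snapshot LV fits"
```

So free space is not the limiting factor in this setup; the failure must come from the container's storage not being on LVM at all.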

Thanks a lot
 
Hello, good afternoon.

Please, is there any solution to this problem? Could it be an unknown bug?


Thanks for everything.
 
Hi dietmar,

/etc/vz/vz.conf

## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000
## Logging parameters
LOGGING=yes
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0
## Disk quota parameters
DISK_QUOTA=yes
VZFASTBOOT=no
# Disable module loading. If set, vz initscript does not load any modules.
#MODULES_DISABLED=yes
# The name of the device whose IP address will be used as source IP for CT.
# By default automatically assigned.
#VE_ROUTE_SRC_DEV="eth0"
# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=detect
## Fail if there is another machine in the network with the same IP
ERROR_ON_ARPFAIL="no"
## Template parameters
TEMPLATE=/var/lib/vz/template
## Defaults for containers
VE_ROOT=/var/lib/vz/root/$VEID
VE_PRIVATE=/var/lib/vz/private/$VEID
## Filesystem layout for new CTs: either simfs (default) or ploop
#VE_LAYOUT=ploop
## Load vzwdog module
VZWDOG="no"
## IPv4 iptables kernel modules to be enabled in CTs by default
##IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"
## IPv4 iptables kernel modules to be loaded by init.d/vz script
IPTABLES_MODULES="$IPTABLES"
IPTABLES="ipt_REJECT ipt_recent ipt_owner ipt_REDIRECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"
## Enable IPv6
IPV6="yes"
## IPv6 ip6tables kernel modules
IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"




Thank you !
 
Hi dietmar,

ONBOOT="no"
PHYSPAGES="0:13000M"
SWAPPAGES="0:2000M"
KMEMSIZE="5909M:6500M"
DCACHESIZE="2954M:3250M"
LOCKEDPAGES="6500M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="75G:84480M"
DISKINODES="15000000:16500000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"
# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="8"
HOSTNAME="xxxxxx.xxx.xxx"
SEARCHDOMAIN="xxxxx.xxx.xxx"
NAMESERVER="xxx.xxx.xxx.xxx xxx.xxx.xxx.xxx"
IP_ADDRESS="xxx.xxx.xxx.xxx"
VE_ROOT="/var/lib/vz/root/120"
VE_PRIVATE="/backups/private/120"
OSTEMPLATE="xxxxxxx-xx-xxx_xx.tar.gz"
FEATURES="nfs: on"

Thanks a lot !
 
I have seen the same when I do an automatic backup. The only difference between the two nodes is that the node which does not fail is installed on top of Debian, while the failing one is installed using the defaults from the bare-bones installer. Snippets from the log below:

OpenVZ without problems:
vzdump 109 114 115 112 117 101 --quiet 1 --mailto xxx@yyy --mode snapshot --compress lzo --storage qnap_nfs

101: Feb 02 05:15:01 INFO: Starting Backup of VM 101 (openvz)
101: Feb 02 05:15:01 INFO: CTID 101 exist mounted running
101: Feb 02 05:15:01 INFO: status = running
101: Feb 02 05:15:01 INFO: backup mode: snapshot
101: Feb 02 05:15:01 INFO: ionice priority: 7
101: Feb 02 05:15:01 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-esx1-0')
101: Feb 02 05:15:01 INFO: Logical volume "vzsnap-esx1-0" created
101: Feb 02 05:15:01 INFO: creating archive '/mnt/pve/qnap_nfs/dump/vzdump-openvz-101-2013_02_02-05_15_01.tar.lzo'
101: Feb 02 05:15:12 INFO: Total bytes written: 556912640 (532MiB, 52MiB/s)
101: Feb 02 05:15:30 INFO: archive file size: 296MB
101: Feb 02 05:15:30 INFO: delete old backup '/mnt/pve/qnap_nfs/dump/vzdump-openvz-101-2013_01_26-05_15_02.tar.lzo'
101: Feb 02 05:15:31 INFO: Finished Backup of VM 101 (00:00:30)

OpenVZ with problems:
vzdump 109 114 115 112 117 101 --quiet 1 --mailto xxx@yyy --mode snapshot --compress lzo --storage qnap_nfs

112: Feb 02 05:15:02 INFO: Starting Backup of VM 112 (openvz)
112: Feb 02 05:15:02 INFO: CTID 112 exist mounted running
112: Feb 02 05:15:02 INFO: status = running
112: Feb 02 05:15:02 INFO: mode failure - unable to detect lvm volume group
112: Feb 02 05:15:02 INFO: trying 'suspend' mode instead
112: Feb 02 05:15:02 INFO: backup mode: suspend
112: Feb 02 05:15:02 INFO: ionice priority: 7
112: Feb 02 05:15:02 INFO: starting first sync /mnt/pve/qnap_nfs/private/112/ to /mnt/pve/qnap_nfs/dump/vzdump-openvz-112-2013_02_02-05_15_02.tmp
112: Feb 02 05:24:50 INFO: Number of files: 36205
112: Feb 02 05:24:50 INFO: Number of files transferred: 28741
112: Feb 02 05:24:50 INFO: Total file size: 1403524354 bytes
112: Feb 02 05:24:50 INFO: Total transferred file size: 1400101483 bytes
112: Feb 02 05:24:50 INFO: Literal data: 1400101483 bytes
112: Feb 02 05:24:50 INFO: Matched data: 0 bytes
112: Feb 02 05:24:50 INFO: File list size: 838756
112: Feb 02 05:24:50 INFO: File list generation time: 0.055 seconds
112: Feb 02 05:24:50 INFO: File list transfer time: 0.000 seconds
112: Feb 02 05:24:50 INFO: Total bytes sent: 1402370445
112: Feb 02 05:24:50 INFO: Total bytes received: 587293
112: Feb 02 05:24:50 INFO: sent 1402370445 bytes received 587293 bytes 2383955.37 bytes/sec
112: Feb 02 05:24:50 INFO: total size is 1403524354 speedup is 1.00
112: Feb 02 05:24:50 INFO: first sync finished (588 seconds)
112: Feb 02 05:24:50 INFO: suspend vm
112: Feb 02 05:24:50 INFO: Setting up checkpoint...
112: Feb 02 05:24:50 INFO: suspend...
112: Feb 02 05:24:50 INFO: get context...
112: Feb 02 05:24:50 INFO: Checkpointing completed successfully
112: Feb 02 05:24:50 INFO: starting final sync /mnt/pve/qnap_nfs/private/112/ to /mnt/pve/qnap_nfs/dump/vzdump-openvz-112-2013_02_02-05_15_02.tmp
112: Feb 02 05:25:57 INFO: Number of files: 36205
112: Feb 02 05:25:57 INFO: Number of files transferred: 0
112: Feb 02 05:25:57 INFO: Total file size: 1403524354 bytes
112: Feb 02 05:25:57 INFO: Total transferred file size: 0 bytes
112: Feb 02 05:25:57 INFO: Literal data: 0 bytes
112: Feb 02 05:25:57 INFO: Matched data: 0 bytes
112: Feb 02 05:25:57 INFO: File list size: 838756
112: Feb 02 05:25:57 INFO: File list generation time: 0.255 seconds
112: Feb 02 05:25:57 INFO: File list transfer time: 0.000 seconds
112: Feb 02 05:25:57 INFO: Total bytes sent: 841401
112: Feb 02 05:25:57 INFO: Total bytes received: 2644
112: Feb 02 05:25:57 INFO: sent 841401 bytes received 2644 bytes 12504.37 bytes/sec
112: Feb 02 05:25:57 INFO: total size is 1403524354 speedup is 1662.85
112: Feb 02 05:25:57 INFO: final sync finished (67 seconds)
112: Feb 02 05:25:57 INFO: resume vm
112: Feb 02 05:25:57 INFO: Resuming...
112: Feb 02 05:25:57 INFO: vm is online again after 67 seconds
112: Feb 02 05:25:58 INFO: creating archive '/mnt/pve/qnap_nfs/dump/vzdump-openvz-112-2013_02_02-05_15_02.tar.lzo'
112: Feb 02 05:30:01 INFO: Total bytes written: 1426124800 (1.4GiB, 6.5MiB/s)
112: Feb 02 05:30:11 INFO: archive file size: 1022MB
112: Feb 02 05:30:11 INFO: delete old backup '/mnt/pve/qnap_nfs/dump/vzdump-openvz-112-2013_01_26-05_15_01.tar.lzo'
112: Feb 02 05:31:35 INFO: Finished Backup of VM 112 (00:16:33)
 
Hi Dietmar,

This container is stored inside '/backups/', which is /dev/sdb1 (there is no LVM on /dev/sdb1).
I have made this configuration change:

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 8e
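A caveat on the step above: changing the partition type to 8e only relabels the partition table entry; it does not initialize the device for LVM, so vzdump still finds no volume group behind it. If /dev/sdb1 is really meant to sit under LVM, the device would also need to be made a physical volume and joined to a volume group. A sketch only, since these commands destroy the existing filesystem (and the backups) on /dev/sdb1:

```shell
# SKETCH ONLY - these commands wipe the existing filesystem on /dev/sdb1.
# The type-8e label alone is cosmetic; LVM needs the device initialized:
pvcreate /dev/sdb1                # initialize the partition as an LVM physical volume
vgextend vg1 /dev/sdb1            # add it to the existing volume group vg1
# (alternatively start a new group:  vgcreate vg2 /dev/sdb1)
lvcreate -L 100G -n backup2 vg1   # then carve out a logical volume as needed
mkfs.ext3 /dev/vg1/backup2        # and put a filesystem on it
```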

Thanks for everything.
 
Hi dietmar,

I have this configuration:

Filesystem Size Used Avail Use% Mounted on
none 12G 232K 12G 1% /dev
/dev/sda1 13G 9.4G 2.7G 79% /
tmpfs 12G 0 12G 0% /lib/init/rw
tmpfs 12G 23M 12G 1% /dev/shm
/dev/sdb1 111G 11G 95G 10% /backups
/dev/fuse 30M 20K 30M 1% /etc/pve
/dev/mapper/vg1-lv1 60G 180M 56G 1% /var/lib/vz
/dev/mapper/vg1-backup 20G 173M 19G 1% /backup

I understand that Proxmox cannot make a snapshot on the same LVM partition (/var/lib/vz) where all the virtual machines live; for this reason I created /dev/mapper/vg1-backup with mke2fs.


fdisk -l

Device Boot Start End Blocks Id System
/dev/sda1 * 1 1658 13310976+ 83 Linux
/dev/sda2 1658 1913 2046976 82 Linux swap / Solaris
/dev/sda3 1913 14593 101858272 8e Linux LVM

Device Boot Start End Blocks Id System
/dev/sdb1 1 14593 117218241 8e Linux LVM


As you told me, I changed the system type of partition /dev/sdb1 to 8e (Linux LVM), but the same error keeps appearing.


INFO: starting new backup job: vzdump 333 --remove 0 --mode snapshot --compress gzip --storage copy123 --node moon
INFO: Starting Backup of VM 333 (openvz)
INFO: CTID 333 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /backups/private/333/ to /backup/dump/vzdump-openvz-333-2013_02_03-22_48_19.tmp


The error is not caused by a lack of space; it comes from the LVM not being recognized.

I am working in a cluster; could the failure be related to that?


Thank you very much for everything!
 
