Proxmox update fails with error "Metadata location begins with invalid VG name"

lupin3rd

New Member
Oct 22, 2019
Today I tried to update my PVE and I got some errors about metadata:
Code:
Setting up initramfs-tools (0.133+deb10u1) ...
update-initramfs: deferring update (trigger activated)
Setting up gpg-wks-client (2.2.12-1+deb10u1) ...
Setting up openssh-server (1:7.9p1-10+deb10u1) ...
rescue-ssh.target is a disabled or a static unit, not starting it.
Setting up pve-kernel-5.0.21-3-pve (5.0.21-7) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.0.21-3-pve /boot/vmlinuz-5.0.21-3-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.0.21-3-pve /boot/vmlinuz-5.0.21-3-pve
update-initramfs: Generating /boot/initrd.img-5.0.21-3-pve
run-parts: executing /etc/kernel/postinst.d/pve-auto-removal 5.0.21-3-pve /boot/vmlinuz-5.0.21-3-pve
run-parts: executing /etc/kernel/postinst.d/zz-pve-efiboot 5.0.21-3-pve /boot/vmlinuz-5.0.21-3-pve
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.0.21-3-pve /boot/vmlinuz-5.0.21-3-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.0.21-3-pve
Found initrd image: /boot/initrd.img-5.0.21-3-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found linux image: /boot/vmlinuz-5.0.18-1-pve
Found initrd image: /boot/initrd.img-5.0.18-1-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found linux image: /boot/vmlinuz-5.0.15-1-pve
Found initrd image: /boot/initrd.img-5.0.15-1-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
Setting up pve-cluster (6.0-7) ...
Setting up zfsutils-linux (0.8.2-pve1) ...
Installing new version of config file /etc/zfs/zfs-functions ...
Created symlink /etc/systemd/system/zfs-volumes.target.wants/zfs-volume-wait.service -> /lib/systemd/system/zfs-volume-wait.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-volumes.target -> /lib/systemd/system/zfs-volumes.target.
zfs-import-scan.service is a disabled or a static unit not running, not starting it.
Setting up zfs-initramfs (0.8.2-pve1) ...
Setting up pve-kernel-5.0 (6.0-9) ...
Setting up gnupg (2.2.12-1+deb10u1) ...
Setting up ssh (1:7.9p1-10+deb10u1) ...
Setting up ceph-common (12.2.11+dfsg1-2.1+b1) ...
Setting up python-cephfs (12.2.11+dfsg1-2.1+b1) ...
Setting up pve-qemu-kvm (4.0.0-7) ...
Setting up libpve-storage-perl (6.0-9) ...
Setting up pve-container (3.0-7) ...
system-pve\x2dcontainer.slice is a disabled or a static unit, not starting it.
Setting up qemu-server (6.0-9) ...
Setting up pve-manager (6.0-9) ...
Processing triggers for systemd (241-7~deb10u1) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for ntp (1:4.2.8p12+dfsg-4) ...
Processing triggers for dbus (1.12.16-1) ...
Processing triggers for pve-ha-manager (3.0-2) ...
Processing triggers for mime-support (3.62) ...
Processing triggers for libc-bin (2.28-10) ...
Processing triggers for initramfs-tools (0.133+deb10u1) ...
update-initramfs: Generating /boot/initrd.img-5.0.21-3-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.

Your System is up-to-date
I haven't tried rebooting the PVE because I'm not sure it would restart correctly.
Has anyone seen the same errors on apt update?
Thank you.
 
Hi,

please send the output of these commands:

Code:
pveversion -v
lvs -a
findmnt -a
 
Code:
root@pve1:/# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-9 (running version: 6.0-9/508dcee0)
pve-kernel-5.0: 6.0-9
pve-kernel-helper: 6.0-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-7
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-7
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-9
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve1

Code:
root@pve1:/# lvs -a
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
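If you want to see what LVM is complaining about, you can inspect the metadata area directly. A read-only diagnostic sketch, assuming /dev/sda3 is the affected PV and using the byte offset 171008 from the error message:
Bash:
# Dump the bytes at the offset LVM reported; healthy text metadata
# starts with the VG name, e.g. `pve {`
dd if=/dev/sda3 bs=1 skip=171008 count=512 status=none | strings | head

# Compare with what LVM can still see on the system
pvs -o pv_name,vg_name,pv_uuid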

Code:
root@pve1:/# findmnt -a
TARGET                                SOURCE     FSTYPE  OPTIONS
/                                     /dev/mapper/pve-root
|                                                ext4    rw,relatime,errors=remount-ro
|-/sys                                sysfs      sysfs   rw,nosuid,nodev,noexec,relati
| |-/sys/kernel/security              securityfs securit rw,nosuid,nodev,noexec,relati
| |-/sys/fs/cgroup                    tmpfs      tmpfs   ro,nosuid,nodev,noexec,mode=7
| | |-/sys/fs/cgroup/unified          cgroup2    cgroup2 rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/systemd          cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/perf_event       cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/rdma             cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/memory           cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/pids             cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/net_cls,net_prio cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/blkio            cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/cpu,cpuacct      cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/hugetlb          cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/cpuset           cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | |-/sys/fs/cgroup/freezer          cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| | `-/sys/fs/cgroup/devices          cgroup     cgroup  rw,nosuid,nodev,noexec,relati
| |-/sys/fs/pstore                    pstore     pstore  rw,nosuid,nodev,noexec,relati
| |-/sys/fs/bpf                       bpf        bpf     rw,nosuid,nodev,noexec,relati
| |-/sys/kernel/debug                 debugfs    debugfs rw,relatime
| |-/sys/kernel/config                configfs   configf rw,relatime
| `-/sys/fs/fuse/connections          fusectl    fusectl rw,relatime
|-/proc                               proc       proc    rw,relatime
| `-/proc/sys/fs/binfmt_misc          systemd-1  autofs  rw,relatime,fd=29,pgrp=1,time
|-/dev                                udev       devtmpf rw,nosuid,relatime,size=50885
| |-/dev/pts                          devpts     devpts  rw,nosuid,noexec,relatime,gid
| |-/dev/shm                          tmpfs      tmpfs   rw,nosuid,nodev
| |-/dev/hugepages                    hugetlbfs  hugetlb rw,relatime,pagesize=2M
| `-/dev/mqueue                       mqueue     mqueue  rw,relatime
|-/run                                tmpfs      tmpfs   rw,nosuid,noexec,relatime,siz
| |-/run/lock                         tmpfs      tmpfs   rw,nosuid,nodev,noexec,relati
| `-/run/rpc_pipefs                   sunrpc     rpc_pip rw,relatime
|-/etc/pve                            /dev/fuse  fuse    rw,nosuid,nodev,relatime,user
|-/var/lib/lxcfs                      lxcfs      fuse.lx rw,nosuid,nodev,relatime,user
`-/mnt/pve/backup-nas-sit             nas-sit:/export/backupvm
                                                 nfs     rw,relatime,vers=3,rsize=5242
 
Today I tried an apt update again, and even with the latest package versions I still get the error:
Code:
Setting up pve-kernel-5.0.21-5-pve (5.0.21-10) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.0.21-5-pve /boot/vmlinuz-5.0.21-5-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.0.21-5-pve /boot/vmlinuz-5.0.21-5-pve
update-initramfs: Generating /boot/initrd.img-5.0.21-5-pve
run-parts: executing /etc/kernel/postinst.d/pve-auto-removal 5.0.21-5-pve /boot/vmlinuz-5.0.21-5-pve
run-parts: executing /etc/kernel/postinst.d/zz-pve-efiboot 5.0.21-5-pve /boot/vmlinuz-5.0.21-5-pve
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.0.21-5-pve /boot/vmlinuz-5.0.21-5-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.0.21-5-pve
Found initrd image: /boot/initrd.img-5.0.21-5-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found linux image: /boot/vmlinuz-5.0.21-3-pve
Found initrd image: /boot/initrd.img-5.0.21-3-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found linux image: /boot/vmlinuz-5.0.18-1-pve
Found initrd image: /boot/initrd.img-5.0.18-1-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found linux image: /boot/vmlinuz-5.0.15-1-pve
Found initrd image: /boot/initrd.img-5.0.15-1-pve
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
  Metadata location on /dev/sda3 at 171008 begins with invalid VG name.
  Failed to read metadata summary from /dev/sda3
  Failed to scan VG from /dev/sda3
/usr/sbin/grub-probe: error: disk `lvmid/qlJGcB-Il4q-xSf3-Pdey-6Rre-eDeW-2bfkmL/jqbanF-B7N9-QfhQ-mOTQ-djuk-75Iq-39wwh7' not found.
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
Setting up libpve-access-control (6.0-3) ...
Setting up libnvpair1linux (0.8.2-pve2) ...
Setting up libzfs2linux (0.8.2-pve2) ...
Setting up pve-container (3.0-10) ...
system-pve\x2dcontainer.slice is a disabled or a static unit, not starting it.
Setting up qemu-server (6.0-13) ...
Setting up pve-kernel-5.0 (6.0-11) ...
Setting up libzpool2linux (0.8.2-pve2) ...
Setting up pve-manager (6.0-11) ...
Setting up zfsutils-linux (0.8.2-pve2) ...
zfs-import-scan.service is a disabled or a static unit not running, not starting it.
Setting up zfs-initramfs (0.8.2-pve2) ...
Processing triggers for mime-support (3.62) ...
Processing triggers for initramfs-tools (0.133+deb10u1) ...
update-initramfs: Generating /boot/initrd.img-5.0.21-5-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
Processing triggers for libc-bin (2.28-10) ...
Processing triggers for systemd (241-7~deb10u1) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for pve-ha-manager (3.0-2) ...
 
Hi,
this happened to me too. The first time, my node still allowed me to move VMs to another node; the second time, migration failed. No kind of online fix helped, as the root partition uses the same VG. Below is the action list that helped me resolve the issue, but unfortunately I do not know the root cause :(. What could have led to this strange state:
  • I had replaced a RAID of two HDDs with a RAID of two SSDs before the first occurrence.
  • With that replacement I reinstalled Proxmox to the latest available 6.1 version.
  • There might be a faulty RAID (hardware, HP), but both times it complained about the same "metadata"... that does not seem very likely.
Now, what to do if this happens:
  • First of all, the server won't survive a reboot: root/boot/everything is on LVM, and remember, we have problems with the VG. So if your server is still online, copy /etc/lvm/backup/pve to somewhere outside the server (e.g. with scp or rsync).
  • Boot it from any bootable CD/DVD that supports your disk controller (in my case CentOS ignored it, so I used Ubuntu).
    • If you forgot/failed/missed copying /etc/lvm/backup/pve, run testdisk to find the lost partitions and extract the pve file with its help. I ran an Ubuntu live CD, installed tmux, openssh-server and testdisk, and started ssh (so I wouldn't have to use iLO too much, as it is less convenient).
  • Then run the commands suggested around the internet. Note that the UUID must be the PV UUID, taken from the backup file (which in my case is /root/pve):
    Bash:
    # PV_DEVICE is the partition LVM complained about; the UUID must be the
    # PV UUID (the "id" field in the physical_volumes section of the backup
    # file), not the VG UUID.
    PV_DEVICE="/dev/sda3"
    PV_UUID="T4CnQi-vMAb-sHp5-VMkq-my7j-BNwZ-Ik1M0X"
    VG_NAME="pve"
    LVM_BACKUP=~/pve
    
    pvcreate --uuid "$PV_UUID" --restorefile "$LVM_BACKUP" --verbose "$PV_DEVICE"
    vgcfgrestore --file "$LVM_BACKUP" "$VG_NAME" --verbose
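After the restore, it's worth checking that LVM can see the VG again before rebooting. A minimal verification sketch, assuming the VG is named pve as in this thread:
Bash:
# Confirm the PV, VG and LVs are visible again
pvs
vgs
lvs

# Activate the logical volumes and regenerate the grub config
vgchange -ay pve
update-grub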

As I said previously, the second time my node did not allow me to migrate all the VMs, so before rebooting I made a backup of all VM disks:
Bash:
for DISK in /dev/mapper/pve-vm--*; do echo "$DISK"; dd if="$DISK" bs=20M | gzip -2 > "/backup/$(basename "$DISK").dd.gz"; done
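To restore one of these images later, something like the following works; the disk name here is hypothetical, matching whatever the loop above produced:
Bash:
# Hypothetical restore of a single image back onto its LV; the LV must
# already exist again (recreated by vgcfgrestore) and be at least as large.
gunzip -c /backup/pve-vm--100--disk--0.dd.gz | dd of=/dev/mapper/pve-vm--100--disk--0 bs=20M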
It's also good to back up /etc/pve/nodes/$nodename$/qemu-server/ if it is a single-node installation (see the sketch below). For clusters this is not really relevant, as the nodes keep information about each other's VMs.
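For example, a quick way to grab those configs (pve1 is the node name from this thread):
Bash:
# Archive the VM config files so they can be copied off the node
tar czf /backup/qemu-server-configs.tar.gz /etc/pve/nodes/pve1/qemu-server/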
 