local-lvm disabled

SergioRius

Last week I added a new node to the cluster and, without me noticing at the time, another node lost its local-lvm in the process.
I only made a cluster for the convenience of managing the nodes from the same address.

When I noticed, this node had all its containers shut down and the local-lvm storage showed a question mark. When I try to open it, it shows the following message:
activating LV 'pve/data' failed: Thin pool pve-data-tpool (253:7) transaction_id is 0, while expected 22

Is there any way to change that transaction_id so it matches and the thin pool activates again?
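
From what I've found so far, the only way I can think of to change it would be to dump the VG metadata, edit the thin pool's transaction_id by hand and restore it. Something along these lines (the file path is just an example, and I haven't dared to run the restore step yet):

Code:
# dump the current VG metadata to a file (path is only an example)
vgcfgbackup -f /root/pve-metadata.txt pve

# in that file the "data" LV has a thin-pool segment containing a line like
#   transaction_id = 22
# which would have to be edited so it matches what the pool itself reports (0)

# write the edited metadata back; --force is needed for VGs containing thin pools
vgcfgrestore -f /root/pve-metadata.txt --force pve

# then try to activate the pool again
lvchange -ay pve/data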

Some outputs:
Code:
lvscan
  ACTIVE            '/dev/pve/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [55.75 GiB] inherit
  inactive          '/dev/pve/data' [144.37 GiB] inherit
  inactive          '/dev/pve/vm-123-disk-0' [4.00 GiB] inherit
  inactive          '/dev/pve/vm-124-disk-0' [4.00 GiB] inherit
  inactive          '/dev/pve/vm-125-disk-0' [8.00 GiB] inherit
  inactive          '/dev/pve/vm-102-disk-0' [9.00 GiB] inherit
  inactive          '/dev/pve/vm-127-disk-0' [9.00 GiB] inherit
  inactive          '/dev/pve/vm-130-disk-0' [4.00 GiB] inherit
  ACTIVE            '/dev/pve/data_meta0' [<1.48 GiB] inherit
  ACTIVE            '/dev/pve/data_meta1' [<1.48 GiB] inherit
  ACTIVE            '/dev/pve/data_meta2' [<1.48 GiB] inherit

Code:
pvscan
  PV /dev/sda3   VG pve             lvm2 [<223.07 GiB / 11.56 GiB free]
  Total: 1 [<223.07 GiB] / in use: 1 [<223.07 GiB] / in no VG: 0 [0   ]

Code:
vgscan
  Found volume group "pve" using metadata type lvm2

Code:
lvm lvchange -ay /dev/pve/vm-102-disk-0
  Thin pool pve-data-tpool (253:7) transaction_id is 0, while expected 22.
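
The data_meta0..2 LVs in the lvscan output above look like leftovers of earlier lvconvert --repair runs, so I'm wary of blindly repairing again. For reference, this is the sequence I've seen suggested for broken thin pool metadata, but I'm holding off until I know it can't make things worse:

Code:
# make sure the pool is not active (lvscan already shows it inactive)
lvchange -an pve/data

# let LVM rebuild the thin pool metadata with thin_repair;
# the previous metadata is kept as an extra pve/data_metaN volume
lvconvert --repair pve/data

# try to activate the pool and one of the thin volumes again
lvchange -ay pve/data
lvchange -ay pve/vm-102-disk-0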

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes deathshadow

dir: bulk
        path /mnt/bulk
        content backup,vztmpl,iso
        is_mountpoint 1
        mkdir 0
        nodes deathshadow
        prune-backups keep-all=1
        shared 0

lvmthin: core-local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes core

lvmthin: altair-local-lvm        <== The problematic node. This entry had
        thinpool data                disappeared; I added it back (see the
        vgname pve                    note after this config).
        content images,rootdir
        nodes altair

lvmthin: vega-local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes vega

lvmthin: betelgeuse-local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes betelgeuse

nfs: backup
        export /export/backup/proxmox
        path /mnt/pve/backup
        server 10.1.2.30
        content backup
        prune-backups keep-all=1
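
As a side note, I re-added the missing altair-local-lvm entry by editing /etc/pve/storage.cfg directly; if I have the syntax right, the equivalent pvesm command would be:

Code:
# re-create the thin pool storage definition, restricted to node altair
pvesm add lvmthin altair-local-lvm --vgname pve --thinpool data \
        --content images,rootdir --nodes altair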
 