local-lvm disabled

SergioRius

Renowned Member
Mar 11, 2015
Last week I added a new node to the cluster and, without my noticing, another node lost its local-lvm in the process.
I had only made a cluster for the convenience of managing the nodes from a single address.

By the time I noticed, that node had all its containers shut down and the local-lvm storage was showing a question mark. When I try to open it, the following message appears:
activating LV 'pve/data' failed: Thin pool pve-data-tpool (253:7) transaction_id is 0, while expected 22

Is there any way of changing that id so that it matches and the LVM loads again?
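
The only way I can think of doing that (untested, pieced together from the vgcfgbackup(8) and vgcfgrestore(8) man pages; the VG name pve and the file path are just from my setup) is to dump the VG metadata as text, edit the thin pool's transaction_id by hand, and write it back:
Code:
# Dump the current LVM metadata for VG "pve" to a text file
vgcfgbackup pve -f /tmp/pve-vg.cfg

# Change 'transaction_id = 22' in the "data" thin pool segment to
# the value the pool actually reports (0); whether the pool
# metadata still describes the thin volumes is a separate question
nano /tmp/pve-vg.cfg

# Write the edited metadata back; --force is required because the
# VG contains thin volumes, and the man page flags this as risky
vgcfgrestore pve -f /tmp/pve-vg.cfg --force

# Retry activation
lvchange -ay pve/data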

Some outputs:
Code:
lvscan
  ACTIVE            '/dev/pve/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [55.75 GiB] inherit
  inactive          '/dev/pve/data' [144.37 GiB] inherit
  inactive          '/dev/pve/vm-123-disk-0' [4.00 GiB] inherit
  inactive          '/dev/pve/vm-124-disk-0' [4.00 GiB] inherit
  inactive          '/dev/pve/vm-125-disk-0' [8.00 GiB] inherit
  inactive          '/dev/pve/vm-102-disk-0' [9.00 GiB] inherit
  inactive          '/dev/pve/vm-127-disk-0' [9.00 GiB] inherit
  inactive          '/dev/pve/vm-130-disk-0' [4.00 GiB] inherit
  ACTIVE            '/dev/pve/data_meta0' [<1.48 GiB] inherit
  ACTIVE            '/dev/pve/data_meta1' [<1.48 GiB] inherit
  ACTIVE            '/dev/pve/data_meta2' [<1.48 GiB] inherit
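
As an aside, the data_meta0 through data_meta2 LVs above look like the metadata backups that lvconvert --repair leaves behind (lvmthin(7)), so a repair seems to have been attempted on this pool already. If the live metadata is the broken part, my understanding is that the documented path would be another repair pass while the pool is inactive:
Code:
# Rebuild the pool metadata with thin_repair; the old metadata is
# kept as pve/data_metaN for inspection (see lvmthin(7))
lvconvert --repair pve/data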

Code:
pvscan
  PV /dev/sda3   VG pve             lvm2 [<223.07 GiB / 11.56 GiB free]
  Total: 1 [<223.07 GiB] / in use: 1 [<223.07 GiB] / in no VG: 0 [0   ]

Code:
vgscan
  Found volume group "pve" using metadata type lvm2

Code:
lvm lvchange -ay /dev/pve/vm-102-disk-0
  Thin pool pve-data-tpool (253:7) transaction_id is 0, while expected 22.
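
Since lvscan shows the saved metadata copies as ACTIVE, I believe thin_dump and thin_check from thin-provisioning-tools can read them directly, to see which transaction id they actually carry and whether they still list my thin volumes (the device path below assumes the usual /dev/<vg>/<lv> naming):
Code:
# The superblock line includes transaction="N"; the <device>
# entries that follow should correspond to the thin volumes
thin_dump /dev/pve/data_meta0 | head -n 20

# Sanity-check the saved metadata for corruption
thin_check /dev/pve/data_meta0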

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes deathshadow

dir: bulk
        path /mnt/bulk
        content backup,vztmpl,iso
        is_mountpoint 1
        mkdir 0
        nodes deathshadow
        prune-backups keep-all=1
        shared 0

lvmthin: core-local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes core

lvmthin: altair-local-lvm        <== The problematic node.
        thinpool data                The entry had disappeared,
        vgname pve                   but I added it back.
        content images,rootdir
        nodes altair

lvmthin: vega-local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes vega

lvmthin: betelgeuse-local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes betelgeuse

nfs: backup
        export /export/backup/proxmox
        path /mnt/pve/backup
        server 10.1.2.30
        content backup
        prune-backups keep-all=1
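
For completeness, rather than editing /etc/pve/storage.cfg by hand, I understand the missing entry can also be recreated with pvesm (storage id and options mirroring the ones above):
Code:
pvesm add lvmthin altair-local-lvm --thinpool data --vgname pve \
    --content images,rootdir --nodes altair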
 