Hello again everybody!
I have a 3-node cluster running Proxmox 6.2-4 with Ceph storage. On one of my nodes, I was trying to create a new VM on the local-lvm (pve/data) storage, but it fails with:
Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
TASK ERROR: unable to create VM 123 - lvcreate 'pve/vm-123-disk-0'
error: Aborting. Failed to locally activate thin pool pve/data.
It looks like the error in this post, but that post deals with the local-lvm storage on a single-node system (not a cluster):
https://forum.proxmox.com/threads/logical-volume-pve-data-is-not-a-thin-pool.54677/
I've attached an image of the lvs -a output, which shows that the attributes of the pve local-lvm are incorrect.
I'm afraid to follow the instructions from the link above, because they would require blowing away and recreating the local-lvm on one of the nodes in the cluster. Will this affect the other nodes' local-lvm? I'd really like to use that partition to store one of my VMs, since I don't want to use Ceph storage for non-critical VMs. What are my options here?
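From what I gather in that thread, the usual cause is that the thin pool's hidden component volumes (data_tmeta/data_tdata) are left active on their own, which blocks activation of the pool itself. The fix there boils down to removing the stale device-mapper entries and re-activating the pool, something like the following (untested on my cluster; device names taken from my node above):

```shell
# Inspect activation attributes of the pool and its hidden sub-LVs
lvs -a -o lv_name,lv_attr,pool_lv pve

# Remove the stale device-mapper entries for the hidden components,
# then try to activate the thin pool itself
dmsetup remove pve-data_tmeta pve-data_tdata
lvchange -ay pve/data
```

Is something along these lines safe to run on a cluster node, or does it risk the other nodes' local-lvm?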
Here are some additional details:
root@proxmox6:~# lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0     0  1.8T  0 disk
├─sda1                                                                                                  8:1     0 1007K  0 part
├─sda2                                                                                                  8:2     0  512M  0 part
└─sda3                                                                                                  8:3     0  1.8T  0 part
  ├─pve-swap                                                                                          253:8     0    8G  0 lvm  [SWAP]
  ├─pve-root                                                                                          253:9     0   96G  0 lvm  /
  ├─pve-data_tmeta                                                                                    253:10    0 15.8G  0 lvm
  └─pve-data_tdata                                                                                    253:11    0  1.7T  0 lvm
sdb                                                                                                     8:16    0  1.3T  0 disk
└─ceph--a1e853c7--c73c--4fa9--be40--f6d1912e4bda-osd--block--f03f2602--84d4--431b--b6e8--f72d304e13a3 253:0     0  1.3T  0 lvm
sdc                                                                                                     8:32    0  1.8T  0 disk
└─ceph--c056af9a--da3a--4c68--b242--07da19cfbaa5-osd--block--820965dc--101f--4c0c--8006--1cbbe2bcbf0c 253:1     0  1.8T  0 lvm
root@proxmox6:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: ceph-FAST
        content images
        krbd 0
        pool ceph-FAST

rbd: ceph-SLOW
        content images
        krbd 0
        pool ceph-SLOW

nfs: Proxmox2-FS
        export /FirstRAIDZ
        path /mnt/pve/Proxmox2-FS
        server 10.2.10.30
        content rootdir,backup,iso,snippets,vztmpl,images
        maxfiles 3