[SOLVED] Logical volume pve/data is not a thin pool

peterG

Member
Jan 20, 2014
Boston, MA, US
Trying to do an offline migration between two separate standalone nodes at different locations, NOT a cluster.

The final error reads: Logical volume pve/data is not a thin pool

However, the two setups are identical. We just got a subscription for the main production standalone node and did a full dist-upgrade on it, so it is on all the latest 5.4 packages. What we are doing is simply replicating a VM to a standby, freshly installed Proxmox 5.4 Dell chassis: we physically moved the vma.gz dump file to the local storage of the new Proxmox install and are trying to restore the VM from the GUI.
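For context, here is a minimal CLI sketch of that workflow (the flags and the dump directory are illustrative assumptions, not exactly what we ran; the actual dump and restore were done through the GUI):

Code:
# On the source node: dump the VM (mode/compression shown here are assumptions)
vzdump 103 --mode stop --compress gzip --dumpdir /path/to/dump
# Copy the .vma.gz archive to the target node, then restore it there as VMID 113
qmrestore /mnt/mira2/dump/vzdump-qemu-103-2019_05_25-16_46_23.vma.gz 113 --storage local-lvm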

The complete error output reads as follows:
Code:
restore vma archive: zcat /mnt/mira2/dump/vzdump-qemu-103-2019_05_25-16_46_23.vma.gz | vma extract -v -r /var/tmp/vzdumptmp17384.fifo - /var/tmp/vzdumptmp17384
CFG: size: 649 name: qemu-server.conf
DEV: dev_id=1 size: 1127428915200 devname: drive-ide1
DEV: dev_id=2 size: 214748364800 devname: drive-ide3
DEV: dev_id=3 size: 49392123904 devname: drive-scsi0
CTIME: Sat May 25 16:46:26 2019
Using default stripesize 64.00 KiB.
no lock found trying to remove 'create' lock
TASK ERROR: command 'set -o pipefail && zcat /mnt/mira2/dump/vzdump-qemu-103-2019_05_25-16_46_23.vma.gz | vma extract -v -r /var/tmp/vzdumptmp17384.fifo - /var/tmp/vzdumptmp17384' failed: lvcreate 'pve/vm-113-disk-0' error: Logical volume pve/data is not a thin pool.
 
More info from the target node that the VM is to be restored on (NOT a cluster, just a single node).
The machine runs a RAID 6 array as its local storage.

Code:
# pvs
  PV         VG  Fmt  Attr PSize PFree
  /dev/sda3  pve lvm2 a--  3.64t    0

# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  pve   1   4   0 wz--n- 3.64t    0

# lvs
  LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data     pve -wi-a-----   2.65t
  miraqcow pve -wi-ao---- 900.88g
  root     pve -wi-ao----  96.00g
  swap     pve -wi-ao----   8.00g

# lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                8:0    0   3.7T  0 disk
├─sda1             8:1    0  1007K  0 part
├─sda2             8:2    0   512M  0 part
└─sda3             8:3    0   3.7T  0 part
  ├─pve-swap     253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root     253:1    0    96G  0 lvm  /
  ├─pve-miraqcow 253:2    0 900.9G  0 lvm  /mnt/mira2
  └─pve-data     253:3    0   2.7T  0 lvm
sr0               11:0    1  1024M  0 rom
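The lvs output above already shows the problem: pve/data has attributes -wi-a----- (a plain linear LV), while a thin pool's attribute string starts with "t". If you want to check that explicitly, something like this should work (lv_attr and segtype are standard lvs report fields; the exact output spacing may differ):

Code:
# A healthy thin pool reports segtype "thin-pool" and an attr string starting with "t";
# on this box it reports "linear" and "-wi-a-----" instead.
lvs -o lv_name,segtype,lv_attr,lv_size pve/data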

/etc/pve/storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

dir: miraprxmx2
        path /mnt/mira2
        content rootdir,iso,backup
        maxfiles 2
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
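That lvmthin entry is what makes the restore try to allocate vm-113-disk-0 as a thin volume inside pve/data. As a quick sanity check (a sketch; the exact columns vary by version), pvesm should list the storage with type lvmthin:

Code:
# Lists the configured storages, their type and whether they are usable;
# local-lvm cannot work here until pve/data is actually a thin pool.
pvesm status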

The pveversion -v output of the target node (the standalone node the VM is to be restored to):
Code:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

And the pveversion -v output of the source node the VM was obtained from (where it was vzdumped; this node has all the latest packages installed):
Code:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-9
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-51
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-42
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-26
pve-cluster: 5.0-37
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-20
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-51
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
For future readers: after the fix, the lvs -a output should look like this:
Code:
lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz--   2.65t             0.07   0.47
  [data_tdata]    pve Twi-ao----   2.65t
  [data_tmeta]    pve ewi-ao----  88.00m
  [lvol0_pmspare] pve ewi-------  88.00m
Just notice the attributes (twi-aotz-- rather than -wi-a-----) and compare with the lvs output in my original posting above.

Solution:
In the GUI or in /etc/pve/storage.cfg, remove the storage entry that points at the data LV, then remove the LV itself:
lvremove /dev/pve/data
Then recreate it, this time as a thin pool:
lvcreate --name data -T -l +100%FREE pve
OR, if you want a fixed size instead of all the remaining free space in the volume group:
lvcreate --name data -T -L 10T pve (this gives 10 terabytes to the new thin pool)

Then re-add the storage in the GUI (easiest) or just manually edit /etc/pve/storage.cfg:
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
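Putting the whole thing together as a shell sketch (this assumes the VG is pve, the storage ID is local-lvm, and that there is nothing on pve/data you still need; lvremove is destructive):

Code:
# 1. Remove or disable the local-lvm storage entry (GUI or /etc/pve/storage.cfg)
#    so Proxmox no longer references the old LV.
# 2. Destroy the plain LV -- this wipes anything stored on it.
lvremove /dev/pve/data
# 3. Recreate it as a thin pool, using all remaining free space in the VG ...
lvcreate --name data -T -l +100%FREE pve
#    ... or with a fixed size instead:
# lvcreate --name data -T -L 10T pve
# 4. Re-add the lvmthin storage (GUI: Datacenter -> Storage -> Add -> LVM-Thin),
#    or put the lvmthin: local-lvm block above back into /etc/pve/storage.cfg.
# 5. Verify the attributes of data now start with "t":
lvs -a pve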
 