Hi,
I accidentally deleted a KVM VM and I don't have any backup. Now I'm trying to recover it.
I'm using the following procedure to recover it.
Step 1:
Cloned the Proxmox node's 900GB drive to an image file (the dd output must be a file, not a directory; sda.img here is a placeholder name):
dd if=/dev/sda of=/mnt/pve/backup/sda.img
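(If it matters, a checksum comparison along these lines could confirm the clone matches, assuming the source disk wasn't written to in between; just a sketch, since reading 900GB twice is slow:)
sha256sum /dev/sda /mnt/pve/backup/sda.img   # both hashes should match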
Step 2:
Created a new virtual Proxmox node and restored Proxmox's drive onto it using dd. The new node is identical to my original node.
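(Roughly like this; /dev/sda as the target device on the new node is an assumption:)
dd if=/mnt/pve/backup/sda.img of=/dev/sda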
Step 3:
Found the matching VG archive file.
cd /etc/lvm/archive/
I found that pve_01253-2142000420.vg is the matching one:
description = "Created *before* executing '/sbin/lvremove -f pve/vm-310-disk-1'"
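(In case it helps anyone else, the matching archive can be found by grepping the archive directory for the lvremove command, e.g.:)
grep -l "lvremove -f pve/vm-310-disk-1" /etc/lvm/archive/pve_*.vg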
Step 4:
Restored the volume group:
vgcfgrestore pve --force -f pve_01253-2142000420.vg
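(After the restore, the recovered thin volume may not activate on its own; as I understand it, lvchange with -K ignores the activation-skip flag that restored thin volumes can carry. Sketch only:)
lvchange -ay -K pve/vm-310-disk-1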
Step 5:
lvs shows my vm-310-disk-1 with the correct size, and I can also see the disk in the Proxmox GUI under local-lvm.
Now I'm trying to create a VM and attach this disk,
but I'm getting the following error and cannot create any disk:
lvcreate 'pve/vm-100-disk-2' error: Thin pool transaction_id is 1258, while expected 1247. (500)
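(From what I've read, this means the LVM metadata restored from the archive carries an older thin-pool transaction_id (1247) than the one the pool itself reports (1258). Would it be safe to dump the current metadata, bump the transaction_id, and restore it? Untested sketch; it assumes the value appears exactly once in the dump:)
vgcfgbackup -f /root/pve-fix.vg pve                               # dump current VG metadata
sed -i 's/transaction_id = 1247/transaction_id = 1258/' /root/pve-fix.vg
vgcfgrestore --force -f /root/pve-fix.vg pve                      # write it back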
Alternatively, is there any way I can copy this disk from this node to another node, or another fix for the above error?
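(For the copy route, I was thinking of something along these lines; untested sketch, where othernode is a placeholder and a vm-310-disk-1 of the same size is assumed to already exist on the target:)
lvchange -ay -K pve/vm-310-disk-1
dd if=/dev/pve/vm-310-disk-1 bs=4M | ssh root@othernode "dd of=/dev/pve/vm-310-disk-1 bs=4M"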
root@vps100:~# pveversion
pve-manager/4.4-1/eb2d6f1e (running kernel: 4.4.35-1-pve)
root@vps100:~# pveversion -v
proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-1 (running version: 4.4-1/eb2d6f1e)
pve-kernel-4.4.35-1-pve: 4.4.35-76
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-88
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
ceph: 0.94.10-1~bpo80+1
root@vps100:~# lvs
LV                      VG  Attr       LSize   Pool Origin        Data% Meta% Move Log Cpy%Sync Convert
data                    pve twi---tz-- 717.85g
root                    pve -wi-ao----  96.00g
snap_vm-103-disk-1_init pve Vri---tz-k  25.00g data vm-103-disk-1
swap                    pve -wi-ao----   8.00g
------------
------------
vm-310-disk-1           pve Vwi---tz-- 400.00g data
root@vps100:~# vgdisplay
--- Volume group ---
VG Name               pve
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2509
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                27
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               837.84 GiB
PE Size               4.00 MiB
Total PE              214487
Alloc PE / Size       210439 / 822.03 GiB
Free PE / Size        4048 / 15.81 GiB
VG UUID               XM7Rjg-JBub-GVm4-mNWL-kqzh-xXuR-LguhTD
Thank You.