Hi there,
Today I moved my Proxmox installation from a SATA SSD to an NVMe SSD. Then I plugged the old SATA SSD into a Linux laptop via a SATA-to-USB adapter and tried to wipe the drive with GParted. But GParted told me that there is an active LVM group on it and that wiping the drive is not recommended.
I already tried some commands, but I ran into issues.
This is what lvdisplay shows:
Code:
root@linux:~# lvdisplay pve
WARNING: Not using lvmetad because a repair command was run.
--- Logical volume ---
LV Name data
VG Name pve
LV UUID wbkkUw-5Aqa-THa3-Ww4D-8qhG-Peb1-3uh2mE
LV Write Access read/write
LV Creation host, time proxmox, 2020-10-29 13:48:20 +0100
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status NOT available
LV Size <141.43 GiB
Current LE 36206
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/pve/vm-100-disk-2
LV Name vm-100-disk-2
VG Name pve
LV UUID Kdx0Ta-hIzi-Kim3-80vX-ZtaF-82XI-6HnlDU
LV Write Access read/write
LV Creation host, time pve, 2020-10-29 14:04:30 +0100
LV Pool name data
LV Status NOT available
LV Size 32.00 GiB
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
....
And this is the output of the lvs -a command:
Code:
root@linux:~# lvs -a
WARNING: Not using lvmetad because a repair command was run.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi---tz-- <141.43g
[data_tdata] pve Twi------- <141.43g
[data_tmeta] pve ewi------- <1.45g
[lvol0_pmspare] pve ewi------- <1.45g
snap_vm-102-disk-0_Default pve Vri---tz-k 60.00g data vm-102-disk-0
snap_vm-201-disk-0_Working pve Vri---tz-k 8.00g data vm-201-disk-0
snap_vm-202-disk-0_Working pve Vri---tz-k 8.00g data vm-202-disk-0
vm-100-disk-2 pve Vwi---tz-- 32.00g data
vm-101-disk-0 pve Vwi---tz-- 32.00g data
vm-102-disk-0 pve Vwi---tz-- 60.00g data
vm-200-disk-1 pve Vwi---tz-- 8.00g data
vm-201-disk-0 pve Vwi---tz-- 8.00g data
vm-202-disk-0 pve Vwi---tz-- 8.00g data
vm-203-disk-0 pve Vwi---tz-- 8.00g data
vm-204-disk-0 pve Vwi---tz-- 8.00g data
vm-205-disk-0 pve Vwi---tz-- 8.00g data
vm-206-disk-0 pve Vwi---tz-- 8.00g data
vm-207-disk-0 pve Vwi---tz-- 8.00g data
vm-208-disk-0 pve Vwi---tz-- 8.00g data
vm-209-disk-0 pve Vwi---tz-- 8.00g data
vm-210-disk-0 pve Vwi---tz-- 8.00g data
vm-211-disk-0 pve Vwi---tz-- 8.00g data
vm-212-disk-0 pve Vwi---tz-- 8.00g data
vm-300-disk-0 pve Vwi---tz-- 8.00g data
When I try to remove one of the logical volumes, I get an error:
Code:
root@linux:~# lvremove /dev/pve/vm-100-disk-2
WARNING: Not using lvmetad because a repair command was run.
Do you really want to remove and DISCARD logical volume pve/vm-100-disk-2? [y/n]: y
/usr/sbin/thin_check: execvp failed: No such file or directory
Check of pool pve/data failed (status:2). Manual repair required!
Failed to update pool pve/data.
When I try to repair it, this happens:
Code:
root@linux:~# lvconvert --repair pve/data
WARNING: Disabling lvmetad cache for repair command.
WARNING: Not using lvmetad because of repair.
/usr/sbin/thin_repair: execvp failed: No such file or directory
Repair of thin metadata volume of thin pool pve/data failed (status:2). Manual repair required!
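Both errors say that /usr/sbin/thin_check and /usr/sbin/thin_repair don't exist on the laptop, so I guess the thin tools simply aren't installed there. As far as I know, on a Debian-based system they come from the thin-provisioning-tools package, so maybe just installing that and retrying would already help (just a guess on my side):
Code:
apt install thin-provisioning-tools   # provides thin_check and thin_repair
lvconvert --repair pve/data           # retry the repair afterwards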
I don't care about the data on the disk. How can I repair the disk?
I guess I broke the thin pool by trying to repair it :facepalm:
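Since I don't need any of the data, I'm also wondering if I could skip the repair entirely and just deactivate the volume group and wipe the LVM signatures, so GParted sees a blank disk. Something like this is what I have in mind (assuming the SSD shows up as /dev/sdb on the laptop and the Proxmox LVM partition is /dev/sdb3 - those device names are only examples):
Code:
vgchange -an pve        # make sure nothing from the old pve VG is active
wipefs -a /dev/sdb3     # clear the LVM PV signature on the Proxmox partition (example device)
wipefs -a /dev/sdb      # clear the partition table so the whole disk is blank (example device)
Would that be safe, or is the repair route the better way?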
Thanks!