Removing LVM VG

uaudith

New Member
Jul 11, 2023
2
0
1
Recently I saw a buffer I/O error on /dev/dm-xx in the terminal on the host.
Running dmsetup info /dev/dm-xx shows that it corresponds to a previous VG,
and vgs does not show anything about this volume group.
Next, I tried to remove them with dmsetup remove -f /dev/dm-0 /dev/dm-13,
and it gave the following error:
Bash:
device-mapper: remove ioctl on raid0_lvmThinPool-raid0_lvmThinPool_tmeta  failed: Device or resource busy
Command failed.
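For reference, a quick way to see why device-mapper refuses the remove is to check how the dm devices stack on each other and whether anything still holds them open (a generic sketch, nothing here is specific to this setup):
Bash:
# show the device-mapper stack as a tree, so you can see which device sits on top of which
dmsetup ls --tree
# show the open count per device; a count > 0 means something still holds it
dmsetup info -c -o name,open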

Here is the output of dmsetup info and vgs:
Bash:
root@proxmox:~# dmsetup info /dev/dm-0
Name:              raid0_lvmThinPool-raid0_lvmThinPool_tmeta
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: LVM-ahnisMfKYiF98Mn0IbpE5nHa5PWsc2dIkeC31S5UmHbvBp13IDSG1nvdnB8xJA1y-tmeta

root@proxmox:~# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree
  pve                  1   3   0 wz--n- <446.57g <16.00g
  raid5_lvmThinPool    1   2   0 wz--n-   17.46t 512.00m
  raid5_lvmThinPool2   1   2   0 wz--n-   17.46t 512.00m

And the output of dmsetup ls
Bash:
root@proxmox:~# dmsetup ls
pve-data        (253:8)
pve-data_tdata  (253:7)
pve-data_tmeta  (253:6)
pve-root        (253:5)
pve-swap        (253:4)
raid0_lvmThinPool-raid0_lvmThinPool-tpool       (253:13)
raid0_lvmThinPool-raid0_lvmThinPool_tdata       (253:2)
raid0_lvmThinPool-raid0_lvmThinPool_tmeta       (253:0)
raid5_lvmThinPool-raid5_lvmThinPool     (253:10)
raid5_lvmThinPool-raid5_lvmThinPool-tpool       (253:9)
raid5_lvmThinPool-raid5_lvmThinPool_tdata       (253:3)
raid5_lvmThinPool-raid5_lvmThinPool_tmeta       (253:1)
raid5_lvmThinPool-vm--101--disk--0      (253:12)
raid5_lvmThinPool2-raid5_lvmThinPool2   (253:18)
raid5_lvmThinPool2-raid5_lvmThinPool2-tpool     (253:17)
raid5_lvmThinPool2-raid5_lvmThinPool2_tdata     (253:15)
raid5_lvmThinPool2-raid5_lvmThinPool2_tmeta     (253:11)
raid5_lvmThinPool2-vm--100--disk--0     (253:19)
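In that listing, raid0_lvmThinPool-raid0_lvmThinPool-tpool (253:13) is stacked on top of the _tmeta (253:0) and _tdata (253:2) devices, which would explain the "Device or resource busy": the lower devices cannot be removed while the pool device on top still uses them. A sketch of the removal order that should normally work, using the names from the output above (only once nothing else is using the pool):
Bash:
# remove the top-level thin-pool device first, since it holds _tmeta and _tdata open
dmsetup remove raid0_lvmThinPool-raid0_lvmThinPool-tpool
# then the metadata and data sub-devices can be removed
dmsetup remove raid0_lvmThinPool-raid0_lvmThinPool_tmeta
dmsetup remove raid0_lvmThinPool-raid0_lvmThinPool_tdata
# verify that the stale entries are gone
dmsetup ls | grep raid0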

I tried using lsof to find the processes that are using these volumes, which might be causing the "Device or resource busy" error.
However, lsof | grep 253,13 failed with the following error:

Bash:
lsof: no pwd entry for UID 101000
lsof: no pwd entry for UID 101000
lsof: no pwd entry for UID 101000
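Those "no pwd entry for UID 101000" lines are only lsof warnings about a UID that has no entry in the host's passwd file (which can happen with an unprivileged container's mapped UIDs); they do not point to whatever is holding the device. A more direct check on the device node itself would be something like this (a sketch; dm-13 is the minor number grepped for above):
Bash:
# other dm devices stacked on top of dm-13 (kernel-level holders)
ls -l /sys/block/dm-13/holders/
# userspace processes that have the device open or mounted
fuser -vm /dev/dm-13
lsof /dev/dm-13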

The VG that I am trying to remove was previously assigned to VM 101. That VM is no longer running, but a different container is now running with that ID.
 
Have you tried to reboot after editing?

Can you give me the output of lsblk, pvs and lvs?
 
Hi Philipp,
I did not try rebooting. Maybe that will solve this. I'll have to wait a few more days to restart though.

Bash:
root@proxmox:~# lsblk
NAME                                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                               8:0    0  17.5T  0 disk
├─raid5_lvmThinPool-raid5_lvmThinPool_tmeta     253:1    0  15.8G  0 lvm
│ └─raid5_lvmThinPool-raid5_lvmThinPool-tpool   253:9    0  17.4T  0 lvm
│   ├─raid5_lvmThinPool-raid5_lvmThinPool       253:10   0  17.4T  1 lvm
│   └─raid5_lvmThinPool-vm--101--disk--0        253:12   0     5T  0 lvm
└─raid5_lvmThinPool-raid5_lvmThinPool_tdata     253:3    0  17.4T  0 lvm
  └─raid5_lvmThinPool-raid5_lvmThinPool-tpool   253:9    0  17.4T  0 lvm
    ├─raid5_lvmThinPool-raid5_lvmThinPool       253:10   0  17.4T  1 lvm
    └─raid5_lvmThinPool-vm--101--disk--0        253:12   0     5T  0 lvm
sdb                                               8:16   0 447.1G  0 disk
├─sdb1                                            8:17   0  1007K  0 part
├─sdb2                                            8:18   0   512M  0 part /boot/efi
└─sdb3                                            8:19   0 446.6G  0 part
  ├─pve-swap                                    253:4    0     8G  0 lvm  [SWAP]
  ├─pve-root                                    253:5    0    96G  0 lvm  /
  ├─pve-data_tmeta                              253:6    0   3.3G  0 lvm
  │ └─pve-data                                  253:8    0   320G  0 lvm
  └─pve-data_tdata                              253:7    0   320G  0 lvm
    └─pve-data                                  253:8    0   320G  0 lvm
sdd                                               8:48   0  17.5T  0 disk
├─raid5_lvmThinPool2-raid5_lvmThinPool2_tmeta   253:11   0  15.8G  0 lvm
│ └─raid5_lvmThinPool2-raid5_lvmThinPool2-tpool 253:17   0  17.4T  0 lvm
│   ├─raid5_lvmThinPool2-raid5_lvmThinPool2     253:18   0  17.4T  1 lvm
│   └─raid5_lvmThinPool2-vm--100--disk--0       253:19   0     5T  0 lvm
└─raid5_lvmThinPool2-raid5_lvmThinPool2_tdata   253:15   0  17.4T  0 lvm
  └─raid5_lvmThinPool2-raid5_lvmThinPool2-tpool 253:17   0  17.4T  0 lvm
    ├─raid5_lvmThinPool2-raid5_lvmThinPool2     253:18   0  17.4T  1 lvm
    └─raid5_lvmThinPool2-vm--100--disk--0       253:19   0     5T  0 lvm
raid0_lvmThinPool-raid0_lvmThinPool_tmeta       253:0    0  15.8G  0 lvm
└─raid0_lvmThinPool-raid0_lvmThinPool-tpool     253:13   0  23.3T  0 lvm

Bash:
root@proxmox:~# pvs
  PV         VG                 Fmt  Attr PSize    PFree
  /dev/sda   raid5_lvmThinPool  lvm2 a--    17.46t 512.00m
  /dev/sdb3  pve                lvm2 a--  <446.57g <16.00g
  /dev/sdd   raid5_lvmThinPool2 lvm2 a--    17.46t 512.00m

Bash:
root@proxmox:~# lvs
  LV                 VG                 Attr       LSize   Pool               Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data               pve                twi-a-tz-- 320.03g                           0.00   0.52
  root               pve                -wi-ao----  96.00g
  swap               pve                -wi-ao----   8.00g
  raid5_lvmThinPool  raid5_lvmThinPool  twi-aotz--  17.43t                           4.08   1.08
  vm-101-disk-0      raid5_lvmThinPool  Vwi-aotz--   5.00t raid5_lvmThinPool         14.23
  raid5_lvmThinPool2 raid5_lvmThinPool2 twi-aotz--  17.43t                           0.35   0.39
  vm-100-disk-0      raid5_lvmThinPool2 Vwi-aotz--   5.00t raid5_lvmThinPool2        1.24
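For what it's worth, the pvs/lvs output confirms that LVM itself no longer knows about raid0_lvmThinPool; only the stale device-mapper entries are left, so this should be purely a dmsetup cleanup (or a reboot). It may also be worth checking whether a Proxmox storage definition still points at the old thin pool (a sketch; whether such an entry exists is just a guess):
Bash:
# look for a storage entry that still references the removed pool
grep -B1 -A4 raid0 /etc/pve/storage.cfg
# list configured storages and their status
pvesm status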
 
