Offline node and disk hangs on lvcreate/disk move

Zaman

Well-Known Member
Apr 15, 2019
Hello,
when I try to move a disk to another storage, the task gets stuck at 0%, and when I cancel it the web GUI goes offline. Checking `/var/lock/lvm` afterwards shows it is locked by P_global and V_thin2:
Code:
# ls /var/lock/lvm
P_global  V_thin2
/var/lock/lvm# lsof P_global                                                                                                              
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF  NODE NAME                                                                                          
vgs     319968 root    3uR  REG   0,27        0 91195 P_global                                                                                      
vgs     321248 root    3uR  REG   0,27        0 91195 P_global                                                                                      
vgs     321252 root    3uR  REG   0,27        0 91195 P_global    
# ps aux | grep 319919                                                                                                        
root      319919  0.0  0.0  23628 20900 ?        S    10:08   0:00 /sbin/lvcreate -aly -V 1048576000k --name vm-124-disk-1 --thinpool thin2/thin2    
root      323214  0.0  0.0   6180   664 pts/1    S+   10:29   0:00 grep 319919  
# ps aux | grep 319968
# ps aux | grep 321248
# ps aux | grep 321252
root      319968  0.0  0.0  23528  9716 ?        S    10:08   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      327375  0.0  0.0   6180   664 pts/1    S+   10:56   0:00 grep 319968
root      321248  0.0  0.0  23528  9712 ?        S    10:16   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      327377  0.0  0.0   6180   664 pts/1    S+   10:56   0:00 grep 321248
root      321252  0.0  0.0  23528  9556 ?        S    10:16   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      327379  0.0  0.0   6180   664 pts/1    S+   10:56   0:00 grep 321252
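As an aside, those hung `vgs` processes are the periodic storage-status queries from Proxmox; the colon-separated, headerless output they request is easy to parse. A minimal sketch (the sample numbers below are made up for illustration; the field order follows the `--options` list in the command line above):

```shell
#!/bin/sh
# Sketch: parse output of
#   vgs --separator : --noheadings --units b --nosuffix \
#       --options vg_name,vg_size,vg_free,lv_count
# The sample values are invented for illustration only.
sample='thin2:1901948518400:1892856954880:1
pve:814172733440:16106127360:28'
echo "$sample" | while IFS=: read -r name size free count; do
    used=$((size - free))                       # bytes in use per VG
    echo "VG $name: $count LV(s), used $used of $size bytes"
done
```

If these queries hang, pvestatd can no longer refresh storage status, which is why the GUI shows the node as offline.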


# cat /proc/319919/stack                                                                                                      
[<0>] __do_semtimedop+0x54e/0x12f0                                                                                                                  
[<0>] do_semtimedop+0xe0/0x180                                                                                                                      
[<0>] __x64_sys_semtimedop+0x8e/0xa0                                                                                                                
[<0>] do_syscall_64+0x59/0xc0                                                                                                                        
[<0>] entry_SYSCALL_64_after_hwframe+0x61/0xcb  

# ls -al /proc/*/fd/* 2>/dev/null |grep thin2                                                                                
lrwx------ 1 root         root         64 Sep 18 10:08 /proc/319919/fd/8 -> /run/lock/lvm/V_thin2                                                    
lrwx------ 1 root         root         64 Sep 18 10:14 /proc/319968/fd/8 -> /run/lock/lvm/V_thin2                                                    
lrwx------ 1 root         root         64 Sep 18 10:19 /proc/321248/fd/8 -> /run/lock/lvm/V_thin2                                                    
lrwx------ 1 root         root         64 Sep 18 10:19 /proc/321252/fd/8 -> /run/lock/lvm/V_thin2

/var/lock/lvm# lsof V_thin2                                                                                                                
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF  NODE NAME                                                                                          
lvcreate 319919 root    8uW  REG   0,27        0 91193 V_thin2                                                                                      
vgs      319968 root    8u   REG   0,27        0 91193 V_thin2                                                                                      
vgs      321248 root    8u   REG   0,27        0 91193 V_thin2                                                                                      
vgs      321252 root    8u   REG   0,27        0 91193 V_thin2    

# fuser V_thin2
/run/lock/lvm/V_thin2: 319919 319968 321248 321252
# lvdisplay -m thin2/thin2
  --- Logical volume ---
  LV Name                thin2
  VG Name                thin2
  LV UUID                CjvVRl-ZeSm
  LV Write Access        read/write (activated read only)
  LV Creation host, time khv1, 2023-09-18 09:52:09 +0300
  LV Pool metadata       thin2_tmeta
  LV Pool data           thin2_tdata
  LV Status              available
  # open                 0
  LV Size                <1.73 TiB
  Allocated pool data    0.48%
  Allocated metadata     1.43%
  Current LE             452375
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:7
   
  --- Segments ---
  Logical extents 0 to 452374:
    Type                thin-pool
    Monitoring          monitored
    Chunk size          64.00 KiB
    Discards            passdown
    Thin count          2
    Transaction ID      2
    Zero new blocks     yes   
# lvdisplay -m pve/data
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                wxexwd-fBx1
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2022-06-21 10:32:32 +0300
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <758.25 GiB
  Allocated pool data    96.16%
  Allocated metadata     2.67%
  Current LE             194111
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:12
   
  --- Segments ---
  Logical extents 0 to 194110:
    Type        thin-pool
    Monitoring        monitored
    Chunk size        64.00 KiB
    Discards        passdown
    Thin count        28
    Transaction ID    442
    Zero new blocks    yes
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.39-4-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-9
pve-kernel-helper: 7.2-9
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 16.2.13-pve1
ceph-fuse: 16.2.13-pve1
corosync: 3.1.5-pve2 
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1       
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u4
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
# vgs
^C  Interrupted...
  Giving up waiting for lock.
  Can't get lock for thin2.
  Cannot process volume group thin2
  Interrupted...
# lvs
^C  Interrupted...
  Giving up waiting for lock.
  Can't get lock for thin2.
  Cannot process volume group thin2
  Interrupted...
# ps aux | grep 319919
# ps aux | grep 319968
# ps aux | grep 321248
# ps aux | grep 321252
# ps aux | grep 336244
root      319919  0.0  0.0  23628 20900 ?        S    10:08   0:00 /sbin/lvcreate -aly -V 1048576000k --name vm-124-disk-1 --thinpool thin2/thin2
root      342713  0.0  0.0   6180   664 pts/1    R+   12:34   0:00 grep 319919
root      319968  0.0  0.0  23528  9716 ?        S    10:08   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      342715  0.0  0.0   6180   668 pts/1    R+   12:34   0:00 grep 319968
root      321248  0.0  0.0  23528  9712 ?        S    10:16   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      342717  0.0  0.0   6180   728 pts/1    R+   12:34   0:00 grep 321248
root      321252  0.0  0.0  23528  9556 ?        S    10:16   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      342719  0.0  0.0   6180   720 pts/1    R+   12:34   0:00 grep 321252
root      336244  0.0  0.0  23528  9640 ?        S    11:52   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
root      342721  0.0  0.0   6180   728 pts/1    R+   12:34   0:00 grep 336244
 
Hi, it seems that the lvcreate still holds the global lock and is blocking further access. Try killing the process with PID 319919 and check whether you can then run `/sbin/lvcreate -aly -V 1048576000k --name vm-124-disk-1 --thinpool thin2/thin2` manually.

P.S. Please also upgrade to the latest available Proxmox VE version (7.4 or 8.0); 7.2 is already EOL.
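The kill-and-verify step could look like this (a sketch only; PID 319919 and the lock-file names are taken from the output above and will differ on another system):

```shell
#!/bin/sh
# Sketch: terminate the stuck lvcreate and confirm the LVM lock files are released.
# PID 319919 comes from the thread above; adjust it for your system.
pid=319919
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"                                   # polite SIGTERM first
    sleep 5
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"  # escalate only if still alive
fi
# P_global and V_thin2 should disappear once the holder is gone
# (error ignored if the directory does not exist on this system)
ls -l /run/lock/lvm 2>/dev/null || true
```

Note that SIGKILL on an lvcreate mid-operation can leave a half-created LV behind, so check with `lvs` afterwards.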
Killing the process and running the command again shows that the drive already exists. I will try to upgrade the system and hope that solves it.
 
