Thin Pool `vg_hdd-data-tpool` stuck with open count 1, preventing volume deactivation and reboot

Hello Proxmox community,

I’m experiencing a problem with one of my LVM thin pools (`vg_hdd-data-tpool`) on my Proxmox server. Despite stopping all VMs and containers using the `vg_hdd` volume group and unmounting all related logical volumes, the thin pool device remains active with an open count of 1. This prevents me from deactivating the volume and complicates server reboot or maintenance.

### Environment:

* Proxmox VE version: 8.4.0
* LVM thin pool: vg_hdd-data-tpool
* Volume group: vg_hdd
* Storage device: /dev/sdb

### What I have done:

1. Stopped all VMs and containers:

* Verified no running QEMU/KVM processes (`ps aux | grep kvm`)
* Verified no running LXC containers (`pct list` shows stopped)

2. Checked device usage:

* `lsof /dev/mapper/vg_hdd-data-tpool` shows no processes using it
* `fuser -v /dev/mapper/vg_hdd-data-tpool` shows no active processes (as far as I know, both tools only see userspace handles, so I also plan the kernel-level check sketched after this list)

3. Checked dmsetup info:

```
dmsetup info vg_hdd-data-tpool
Name: vg_hdd-data-tpool
State: ACTIVE
Open count: 1
```

4. Attempted to deactivate logical volumes:

* `lvchange -an vg_hdd/data` returns the error "Attempted to decrement suspended device counter below zero."
* `lvchange -an vg_hdd/data-tpool` returns "Failed to find logical volume" (my guess as to why is in the note after this list)

5. Tried stopping the `dmeventd` daemon (PID 930), but the thin pool remains active.

6. Rebooted the server, but the thin pool still shows an open count of 1 on boot.

7. Thin pool metadata errors are reported, along with a suggestion to run `thin_check` with `--repair`, but:

* `thin_check /dev/vg_hdd/data` returns "Couldn't stat path"
* The block device path might be different or not accessible in normal mode.
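Since `lsof` and `fuser` only report userspace file handles, my next idea is to look for kernel-level holders through sysfs and the device-mapper dependency tree. A minimal sketch of what I intend to run (nothing here is destructive; the `dm-N` node is resolved first):

```
# Resolve the dm-N node behind the thin pool's mapper name
DM=$(basename "$(readlink -f /dev/mapper/vg_hdd-data-tpool)")

# Kernel-level holders of that node (other dm devices, loop devices, ...)
ls -l "/sys/block/$DM/holders/"

# Device-mapper dependency tree and per-device open counts for the VG
dmsetup ls --tree
dmsetup info -c -o name,open,attr | grep vg_hdd
```

If I understand the LVM thin layout correctly, the `-tpool` device is normally held open by the pool's public LV (`vg_hdd-data`) mapped on top of it, so I would expect the tree to show that relationship; please correct me if that expectation is wrong.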
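Related to step 4: if I understand LVM's naming correctly, `data-tpool` is only the device-mapper name created for the pool, while at the LV level the pool is addressed as `vg_hdd/data`, so the "Failed to find logical volume" message may simply be expected rather than a separate fault. A quick way to confirm which dm names belong to which LV (`lv_dm_path` is a standard `lvs` field):

```
# Map each LV (including hidden internal ones) to its device-mapper path
lvs -a -o lv_name,lv_attr,lv_dm_path vg_hdd
```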

### Additional info:

* No multipath configuration is in use.
* `lvs -a -o +devices` shows the pool is active and linked to the correct devices.
* No snapshots or dependent volumes appear to be active.
* No other processes (besides kernel workers and system daemons) seem to be holding the device (the checks behind these last two points are sketched just below).
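For reference, this is roughly how I checked those last two points (all read-only commands):

```
# Every LV in the VG, including hidden internal ones (data_tdata, data_tmeta, ...)
lvs -a -o lv_name,lv_attr,pool_lv,origin vg_hdd

# Any thin or thin-pool targets still present in the device-mapper tables
dmsetup table | grep thin

# Low-level status of the pool target itself (usage and flags such as needs_check)
dmsetup status vg_hdd-data-tpool
```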

### Problem summary:

The thin pool device remains active with an open count of 1 despite no visible usage, preventing logical volume deactivation or clean unmount. This blocks maintenance operations and complicates reboots.

### Request:

Could anyone advise how to safely identify what is holding this open count?
Is there a recommended procedure to force deactivate or repair the thin pool without risking data loss?
Would booting into rescue mode and running `thin_check --repair` be the safest next step?
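For completeness, this is the rescue-mode sequence I currently have in mind, pieced together from my reading of the `lvmthin` man page. I have not run it yet and would really appreciate confirmation that it is safe here, in particular whether `lvconvert --repair` is the right tool and whether I should take a metadata dump first:

```
# From a rescue / live environment where vg_hdd can be taken completely offline:

vgchange -an vg_hdd          # deactivate every LV in the volume group

# Automatic thin pool metadata repair; as I read the man page, this runs
# thin_repair onto spare space and keeps the old metadata as vg_hdd/data_meta0
lvconvert --repair vg_hdd/data

vgchange -ay vg_hdd          # reactivate and verify
lvs -a vg_hdd
```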

Any insights or suggestions to resolve this stuck open count issue on the thin pool would be greatly appreciated.

Thank you in advance!