[SOLVED] Can't destroy ZVOL from pool "dataset is busy" -> Solution: LVM picked up on VG/PV inside. Need "filter" in lvm.conf

Hello all,

I hope someone can give me a hint because I am going nuts. I have upgraded my backup server to new gear, which is also using a ZPOOL now.
My plan is to zfs send / zfs receive the VM disks (zvols) from my Proxmox host to this box from time to time.

I started to proof-of-concept my approach. Here is what I was doing (a rough sketch of the commands follows the list):
- creating a ZFS snapshot on the Proxmox source
- Sending the ZFS snapshot via SSH to the Backup-Server (Ubuntu 20.04)
- destroying the ZFS snapshot on the Proxmox source.
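Roughly, one iteration looks like this (a minimal sketch; the source pool, dataset and snapshot names here are placeholders, not my real ones):
Code:
# on the Proxmox source: create the snapshot
zfs snapshot rpool/data/vm-1012-disk-0@backup-20201216

# stream it over SSH into the backup pool on the receiver
zfs send rpool/data/vm-1012-disk-0@backup-20201216 | \
    ssh backup-server "zfs receive HDD-POOL-RAIDZ2/vm-1012-disk-0"

# on the source: drop the snapshot again
zfs destroy rpool/data/vm-1012-disk-0@backup-20201216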

It seemed to work quite well in general, at least until I tried to clean up the ZVOL on the receiving side and do some automation / scripting (bash).
Whatever I do, I can't delete that target ZVOL. The system always states that the dataset is busy.

I have spent three evenings now trying to find a solution. I went back and forth through the web, but nothing I researched worked.
Today I even uninstalled multipathd on the box (as blacklisting wasn't showing positive results), but that didn't work either.
I have no mountpoints and no indication of what is using the ZVOL dataset. The only thing I can say is: all snapshots are gone ("zfs list -t snapshot" is empty) and the ZVOL occupies some space:
Code:
NAME                             USED  AVAIL  REFER  MOUNTPOINT
HDD-POOL-RAIDZ2                  7,6T   4,6T   192K  /HDD-POOL-RAIDZ2
...
HDD-POOL-RAIDZ2/vm-1012-disk-0  20,6G   4,6T  5,09G  -

I am aware that this might not relate to Proxmox itself, but I hope someone from the community jumps in and helps me out here.
Thanks a ton up front for any pointers.
 
Is there still a replication going on? And check the properties of the zvol using zfs get all HDD-POOL-RAIDZ2/vm-1012-disk-0.
 
Thanks for your response, Alwin.
Replication is (as far as I can tell) finished. I even left the systems online for 24h to see if anything was still pending.

I checked the output of the command you mentioned yesterday evening as well, but did not find anything that rang a bell. Here it is:
Code:
#sudo zfs get all HDD-POOL-RAIDZ2/vm-1012-disk-0
NAME                            PROPERTY              VALUE                 SOURCE
HDD-POOL-RAIDZ2/vm-1012-disk-0  type                  volume                -
HDD-POOL-RAIDZ2/vm-1012-disk-0  creation              Mi Dez 16 10:56 2020  -
HDD-POOL-RAIDZ2/vm-1012-disk-0  used                  20,6G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  available             4,6T                  -
HDD-POOL-RAIDZ2/vm-1012-disk-0  referenced            5,09G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  compressratio         1.00x                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  reservation           none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  volsize               20G                   local
HDD-POOL-RAIDZ2/vm-1012-disk-0  volblocksize          8K                    default
HDD-POOL-RAIDZ2/vm-1012-disk-0  checksum              on                    default
HDD-POOL-RAIDZ2/vm-1012-disk-0  compression           off                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  readonly              off                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  createtxg             90911                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  copies                1                     default
HDD-POOL-RAIDZ2/vm-1012-disk-0  refreservation        20,6G                 received
HDD-POOL-RAIDZ2/vm-1012-disk-0  guid                  12541056344737044756  -
HDD-POOL-RAIDZ2/vm-1012-disk-0  primarycache          all                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  secondarycache        all                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  usedbysnapshots       0B                    -
HDD-POOL-RAIDZ2/vm-1012-disk-0  usedbydataset         5,09G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  usedbychildren        0B                    -
HDD-POOL-RAIDZ2/vm-1012-disk-0  usedbyrefreservation  15,5G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  logbias               latency               default
HDD-POOL-RAIDZ2/vm-1012-disk-0  objsetid              10                    -
HDD-POOL-RAIDZ2/vm-1012-disk-0  dedup                 off                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  mlslabel              none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  sync                  standard              default
HDD-POOL-RAIDZ2/vm-1012-disk-0  refcompressratio      1.00x                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  written               5,09G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  logicalused           2,54G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  logicalreferenced     2,54G                 -
HDD-POOL-RAIDZ2/vm-1012-disk-0  volmode               default               default
HDD-POOL-RAIDZ2/vm-1012-disk-0  snapshot_limit        none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  snapshot_count        none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  snapdev               hidden                default
HDD-POOL-RAIDZ2/vm-1012-disk-0  context               none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  fscontext             none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  defcontext            none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  rootcontext           none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  redundant_metadata    all                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  encryption            off                   default
HDD-POOL-RAIDZ2/vm-1012-disk-0  keylocation           none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  keyformat             none                  default
HDD-POOL-RAIDZ2/vm-1012-disk-0  pbkdf2iters           0                     default
 
What message appears when you try to destroy the zvol? And does lsof /dev/HDD-POOL-RAIDZ2/vm-1012-disk-0 show anything?
 
What message appears when you try to destroy the zvol?
It just says: cannot destroy 'HDD-POOL-RAIDZ2/vm-1012-disk-0': dataset is busy
"zfs list -t snapshot" does not show anything at all. The same error message appears when I use destroy with -r or even the -Rf option.

And does lsof /dev/HDD-POOL-RAIDZ2/vm-1012-disk-0 show anything?
No, nothing at all.
I also tried to remove "holds", "mountpoints" etc., but in those cases the system reports something like "not applicable to this type of dataset (ZVOL)".
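(For reference, that attempt looked roughly like this; holds are a snapshot-only concept, hence the message:)
Code:
# "holds" only exist on snapshots, so for a zvol this just reports
# that the operation is not applicable to this type of dataset
sudo zfs holds HDD-POOL-RAIDZ2/vm-1012-disk-0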

I have found very old GitHub references about some race conditions (for instance here: https://github.com/openzfs/zfs/issues/1810), but that
a) doesn't apply, and
b) doesn't help.

As said, I have found references about multipathd causing issues with ZVOL snapshots, but I have tried to rule that out by adding a "blacklist" to /etc/multipath.conf:
Code:
#>>>>>
defaults {
    user_friendly_names yes
}

blacklist {
    # Do not scan ZFS zvols (to avoid problems on ZFS zvols snapshots)
    devnode "^zd[0-9]*"
    #devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|zd)[0-9]*"
    #devnode "^(td|ha)d[a-z]"
}
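(For completeness, the usual way to make multipath re-read such a change is something along these lines; generic commands, nothing special:)
Code:
sudo systemctl restart multipathd
sudo multipath -r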

But that didn't help. So I uninstalled multipath-tools completely. But that didn't move me forward either.

The ZVOL isn't mounted.
I can't find it referenced anywhere in /proc (which is often suggested as the place to look).
I am kind of stuck :(
 
Well, did you restart the node and try then?
 
I have restarted the receiver side multiple times.
Is there any point in doing this on the sender side as well?
 
Do you use the zfs send/receive manually or do you run a storage replication job?
 
I am doing that manually.
The reason is that I want to replicate individual disks (only the boot disks), and I understood that pve-zsync handles the whole VM.
/edit: Hence I mentioned that I don't think this is necessarily related to Proxmox itself. But I really appreciate your help!!!
 
The reason is that I want to replicate individual disks (only the boot disks), and I understood that pve-zsync handles the whole VM.
Well, pve-zsync and the storage replication are a little different. You can specify in the disk config whether a disk should be skipped by replication, see the snippet below.
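For illustration, the per-disk flag looks roughly like this in the VM config (the replicate option as I recall it; VM ID, disk and storage names are made up):
Code:
# /etc/pve/qemu-server/1012.conf (illustrative)
# replicate=0 excludes this disk from storage replication jobs
scsi1: local-zfs:vm-1012-disk-1,size=32G,replicate=0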

I am doing that manually.
Then please check that there is no send/receive running or stuck.

Does a rename work? And are there any bookmarks?
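In command form, those two checks would be roughly (standard zfs commands):
Code:
# any bookmarks left behind by send/receive?
zfs list -t bookmark -r HDD-POOL-RAIDZ2

# does a rename of the zvol go through?
zfs rename HDD-POOL-RAIDZ2/vm-1012-disk-0 HDD-POOL-RAIDZ2/vm-1012-disk-0-test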
 
OK, I might need to look into the PVE sync options again.
There is definitely no sync job hanging.
Interestingly, I can rename the dataset on the destination (the one I would like to destroy); however, the destroy operation is still blocked as before.
This is so odd. Another reboot (after the rename) also did not change the situation.

/edit: Forgot to mention: yesterday I also browsed through all the syslog files, but I could not pick up anything suspicious.
There were no error messages or such.
It seems the system behaves as expected (from the system's perspective) - however, it does not match my expectation :rolleyes: :mad:
 
Weird. Can you post a get all from the pool and from the parent zfs dataset?
 
Let's try a different angle.
Can you please comment on this question: what kind of modifications are included in Proxmox to prevent such issues?
I have read about multipathd, which can block ZVOLs (or their snapshots) and hence needs blacklisting. Anything else?
 
Can you please comment on this question: what kind of modifications are included in Proxmox to prevent such issues?
To prevent locking or to make it happen? In either case it's ZFS's doing, not any special case handling. But I can't rule out that this might happen in some constellations.

I have read about multipathd, which can block ZVOLs (or their snapshots) and hence needs blacklisting. Anything else?
The multipath output might show something like that, and you blacklisted and removed multipath. But now that we speak about it: the disks where the zvol is located aren't coming from an external storage, are they?
 
To prevent locking or to make it happen?
I was wondering what else could potentially hold a lock, and what configuration is included in Proxmox to prevent the symptom I am seeing (i.e. "dataset is busy").


But now that we speak about it: the disks where the zvol is located aren't coming from an external storage, are they?
No, it is coming from local SAS/SATA-attached storage (an LSI 9211-8i to be precise).
I was just thinking: if multipathd can have these side effects, what else could cause this? And from that the question arose: what is included in the Proxmox configuration that my target server (Ubuntu) lacks?
 
I was just thinking: if multipathd can have these side effects, what else could cause this? And from that the question arose: what is included in the Proxmox configuration that my target server (Ubuntu) lacks?
What ZFS version do the PVE and the Ubuntu server use?

I was wondering what else could potentially hold a lock, and what configuration is included in Proxmox to prevent the symptom I am seeing (i.e. "dataset is busy").
Can you please post a zfs get all <pool> and a zpool get all <pool>?
 
ZFS versions:

sender:
zfs-0.8.5-pve1
zfs-kmod-0.8.5-pve1

receiver:
zfs-0.8.3-1ubuntu12.5
zfs-kmod-0.8.3-1ubuntu12.5

Code:
#sudo zfs get all HDD-POOL-RAIDZ2
NAME             PROPERTY              VALUE                 SOURCE
HDD-POOL-RAIDZ2  type                  filesystem            -
HDD-POOL-RAIDZ2  creation              Fr Dez  4 21:10 2020  -
HDD-POOL-RAIDZ2  used                  7,6T                  -
HDD-POOL-RAIDZ2  available             4,6T                  -
HDD-POOL-RAIDZ2  referenced            192K                  -
HDD-POOL-RAIDZ2  compressratio         1.00x                 -
HDD-POOL-RAIDZ2  mounted               yes                   -
HDD-POOL-RAIDZ2  quota                 none                  default
HDD-POOL-RAIDZ2  reservation           none                  default
HDD-POOL-RAIDZ2  recordsize            128K                  default
HDD-POOL-RAIDZ2  mountpoint            /HDD-POOL-RAIDZ2      default
HDD-POOL-RAIDZ2  sharenfs              off                   default
HDD-POOL-RAIDZ2  checksum              on                    default
HDD-POOL-RAIDZ2  compression           off                   default
HDD-POOL-RAIDZ2  atime                 on                    default
HDD-POOL-RAIDZ2  devices               on                    default
HDD-POOL-RAIDZ2  exec                  on                    default
HDD-POOL-RAIDZ2  setuid                on                    default
HDD-POOL-RAIDZ2  readonly              off                   default
HDD-POOL-RAIDZ2  zoned                 off                   default
HDD-POOL-RAIDZ2  snapdir               hidden                default
HDD-POOL-RAIDZ2  aclinherit            restricted            default
HDD-POOL-RAIDZ2  createtxg             1                     -
HDD-POOL-RAIDZ2  canmount              on                    default
HDD-POOL-RAIDZ2  xattr                 on                    default
HDD-POOL-RAIDZ2  copies                1                     default
HDD-POOL-RAIDZ2  version               5                     -
HDD-POOL-RAIDZ2  utf8only              off                   -
HDD-POOL-RAIDZ2  normalization         none                  -
HDD-POOL-RAIDZ2  casesensitivity       sensitive             -
HDD-POOL-RAIDZ2  vscan                 off                   default
HDD-POOL-RAIDZ2  nbmand                off                   default
HDD-POOL-RAIDZ2  sharesmb              off                   default
HDD-POOL-RAIDZ2  refquota              none                  default
HDD-POOL-RAIDZ2  refreservation        none                  default
HDD-POOL-RAIDZ2  guid                  14296966867373421728  -
HDD-POOL-RAIDZ2  primarycache          all                   default
HDD-POOL-RAIDZ2  secondarycache        all                   default
HDD-POOL-RAIDZ2  usedbysnapshots       0B                    -
HDD-POOL-RAIDZ2  usedbydataset         192K                  -
HDD-POOL-RAIDZ2  usedbychildren        7,6T                  -
HDD-POOL-RAIDZ2  usedbyrefreservation  0B                    -
HDD-POOL-RAIDZ2  logbias               latency               default
HDD-POOL-RAIDZ2  objsetid              54                    -
HDD-POOL-RAIDZ2  dedup                 off                   default
HDD-POOL-RAIDZ2  mlslabel              none                  default
HDD-POOL-RAIDZ2  sync                  standard              default
HDD-POOL-RAIDZ2  dnodesize             legacy                default
HDD-POOL-RAIDZ2  refcompressratio      1.00x                 -
HDD-POOL-RAIDZ2  written               192K                  -
HDD-POOL-RAIDZ2  logicalused           6,8T                  -
HDD-POOL-RAIDZ2  logicalreferenced     42K                   -
HDD-POOL-RAIDZ2  volmode               default               default
HDD-POOL-RAIDZ2  filesystem_limit      none                  default
HDD-POOL-RAIDZ2  snapshot_limit        none                  default
HDD-POOL-RAIDZ2  filesystem_count      none                  default
HDD-POOL-RAIDZ2  snapshot_count        none                  default
HDD-POOL-RAIDZ2  snapdev               hidden                default
HDD-POOL-RAIDZ2  acltype               off                   default
HDD-POOL-RAIDZ2  context               none                  default
HDD-POOL-RAIDZ2  fscontext             none                  default
HDD-POOL-RAIDZ2  defcontext            none                  default
HDD-POOL-RAIDZ2  rootcontext           none                  default
HDD-POOL-RAIDZ2  relatime              off                   default
HDD-POOL-RAIDZ2  redundant_metadata    all                   default
HDD-POOL-RAIDZ2  overlay               off                   default
HDD-POOL-RAIDZ2  encryption            off                   default
HDD-POOL-RAIDZ2  keylocation           none                  default
HDD-POOL-RAIDZ2  keyformat             none                  default
HDD-POOL-RAIDZ2  pbkdf2iters           0                     default
HDD-POOL-RAIDZ2  special_small_blocks  0                     default
    


#sudo zpool get all HDD-POOL-RAIDZ2
NAME             PROPERTY                       VALUE                          SOURCE
HDD-POOL-RAIDZ2  size                           5,5T                           -
HDD-POOL-RAIDZ2  capacity                       38%                            -
HDD-POOL-RAIDZ2  altroot                        -                              default
HDD-POOL-RAIDZ2  health                         ONLINE                         -
HDD-POOL-RAIDZ2  guid                           10228772111273785072           -
HDD-POOL-RAIDZ2  version                        -                              default
HDD-POOL-RAIDZ2  bootfs                         -                              default
HDD-POOL-RAIDZ2  delegation                     on                             default
HDD-POOL-RAIDZ2  autoreplace                    on                             local
HDD-POOL-RAIDZ2  cachefile                      -                              default
HDD-POOL-RAIDZ2  failmode                       wait                           default
HDD-POOL-RAIDZ2  listsnapshots                  off                            default
HDD-POOL-RAIDZ2  autoexpand                     off                            default
HDD-POOL-RAIDZ2  dedupditto                     0                              default
HDD-POOL-RAIDZ2  dedupratio                     1.00x                          -
HDD-POOL-RAIDZ2  free                           2,2T                           -
HDD-POOL-RAIDZ2  allocated                      3,3T                           -
HDD-POOL-RAIDZ2  readonly                       off                            -
HDD-POOL-RAIDZ2  ashift                         12                             local
HDD-POOL-RAIDZ2  comment                        -                              default
HDD-POOL-RAIDZ2  expandsize                     -                              -
HDD-POOL-RAIDZ2  freeing                        0                              -
HDD-POOL-RAIDZ2  fragmentation                  0%                             -
HDD-POOL-RAIDZ2  leaked                         0                              -
HDD-POOL-RAIDZ2  multihost                      off                            default
HDD-POOL-RAIDZ2  checkpoint                     -                              -
HDD-POOL-RAIDZ2  load_guid                      5781039950970315647            -
HDD-POOL-RAIDZ2  autotrim                       off                            default
HDD-POOL-RAIDZ2  feature@async_destroy          enabled                        local
HDD-POOL-RAIDZ2  feature@empty_bpobj            active                         local
HDD-POOL-RAIDZ2  feature@lz4_compress           active                         local
HDD-POOL-RAIDZ2  feature@multi_vdev_crash_dump  enabled                        local
HDD-POOL-RAIDZ2  feature@spacemap_histogram     active                         local
HDD-POOL-RAIDZ2  feature@enabled_txg            active                         local
HDD-POOL-RAIDZ2  feature@hole_birth             active                         local
HDD-POOL-RAIDZ2  feature@extensible_dataset     active                         local
HDD-POOL-RAIDZ2  feature@embedded_data          active                         local
HDD-POOL-RAIDZ2  feature@bookmarks              enabled                        local
HDD-POOL-RAIDZ2  feature@filesystem_limits      enabled                        local
HDD-POOL-RAIDZ2  feature@large_blocks           enabled                        local
HDD-POOL-RAIDZ2  feature@large_dnode            enabled                        local
HDD-POOL-RAIDZ2  feature@sha512                 enabled                        local
HDD-POOL-RAIDZ2  feature@skein                  enabled                        local
HDD-POOL-RAIDZ2  feature@edonr                  enabled                        local
HDD-POOL-RAIDZ2  feature@userobj_accounting     active                         local
HDD-POOL-RAIDZ2  feature@encryption             enabled                        local
HDD-POOL-RAIDZ2  feature@project_quota          active                         local
HDD-POOL-RAIDZ2  feature@device_removal         enabled                        local
HDD-POOL-RAIDZ2  feature@obsolete_counts        enabled                        local
HDD-POOL-RAIDZ2  feature@zpool_checkpoint       enabled                        local
HDD-POOL-RAIDZ2  feature@spacemap_v2            active                         local
HDD-POOL-RAIDZ2  feature@allocation_classes     enabled                        local
HDD-POOL-RAIDZ2  feature@resilver_defer         enabled                        local
HDD-POOL-RAIDZ2  feature@bookmark_v2            enabled                        local

All the best
 
That's it. I found it.
The issue I have been experiencing showed up for the dataset I was using for my initial tests.
I tried the approach on another ZVOL and, lo and behold, that one could be deleted. So the question was: what made that dataset different?
Answer: LVM was being used inside that machine (a virtual appliance).

On my Ubuntu server, LVM detected the VG/PV on the newly created dataset / ZVOL:

Code:
#sudo vgdisplay
  --- Volume group ---
  VG Name               pmg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19,50 GiB
  PE Size               4,00 MiB
  Total PE              4991
  Alloc PE / Size       4384 / 17,12 GiB
  Free  PE / Size       607 / 2,37 GiB
  VG UUID               AYpQ7y-jq91-0klC-Glea-DNFi-Jx1n-rj9Qhd
  
#sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/zd16p3
  VG Name               pmg
  PV Size               <19,50 GiB / not usable 2,98 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              4991
  Free PE               607
  Allocated PE          4384
  PV UUID               1DsRpH-aXwF-mEMs-8aYk-Wxan-MkDd-vMWiTf

With this breadcrumb of information I started searching for LVM and ZVOL "dataset is busy" on destroy.
I came across this one on GitHub: https://github.com/openzfs/zfs/issues/3735
There someone states "...On my node it was busy becouse lvm start use it from node - ..."
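A quick way to see that kind of stacking directly is to look at what sits on top of the zvol's block device (generic commands; the zd16/zd16p3 names are taken from the pvdisplay output above):
Code:
# LVM's device-mapper devices show up as children of the zvol
lsblk /dev/zd16

# sysfs also lists the holders that keep the partition busy
ls -l /sys/block/zd16/zd16p3/holders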

Next step: how do I exclude zd devices from LVM scans?
I came across this one: https://www.thegeekdiary.com/lvm-and-multipathing-sample-lvm-filter-strings/
However, the "filter" directive / variable is not present in the shipped /etc/lvm/lvm.conf.
What is even more confusing: neither the man pages nor the Ubuntu docs for 20.04 mention this variable/setting :oops: http://manpages.ubuntu.com/manpages/focal/man5/lvm.conf.5.html
The last mention was in 14.04 (trusty).

Anyway, I have added the following filter directives to /etc/lvm/lvm.conf in the devices { } section:
Code:
    ### CUSTOM START
    filter = [ "r|^/dev/zd*|" ]
    global_filter = [ "r|^/dev/zd*|" ]
    ### CUSTOM END

Did an "update-initramfs -u" and rebooted the server. Now I am able to delete the dataset!
That's it.

Thanks @Alwin for your help!
 
Thanks so much. I ran into this and was stumped. I solved it another way which didn't require a reboot but is only a temporary solution.
I used lvs, vgs, and pvs to determine the LVM members and then lvremove, vgremove, and pvremove to remove them. Then I was able to destroy the ZFS zvol on the host machine (sketch below).
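Roughly like this, reusing the pool and VG/PV names from the earlier posts as placeholders:
Code:
# see what LVM has picked up on the zvol
sudo pvs; sudo vgs; sudo lvs

# tear down the stray VG living inside the zvol
# (this destroys only that guest's LVM metadata, nothing on the host pool)
sudo lvremove pmg
sudo vgremove pmg
sudo pvremove /dev/zd16p3

# now the zvol itself can be destroyed
sudo zfs destroy HDD-POOL-RAIDZ2/vm-1012-disk-0

Simply deactivating the VG with "vgchange -an pmg" should release the hold as well, without touching the LVM metadata, if you want to keep the data inside.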

Basically, I was testing Proxmox in a KVM machine backed by a ZFS zvol. I created additional disks (also ZFS zvols) to test Ceph. When I destroyed the VM, the Ceph disks could not be destroyed due to the host OS picking them up as LVM.

Yes, tburger's solution is better long-term if you intend to keep using Proxmox. This solution is more of a 'don't want to reboot' and 'won't be passing this way again' path.

Thanks for all the work on Proxmox. I learned a bit about Ceph with my virtual 3-node, 9-OSD testbed.
 
Just wanted to let you know that for me (PVE 5.3) it was not LVM, but multipath that was probably blocking the dataset, so just stopping the "multipath-tools" service helped. Thanks a lot for mentioning this in this thread, I wouldn't have thought about it. I also added some of the rules mentioned above to the multipath.conf blacklist, but have not tested them.

By the way, I believe I have a stock lvm.conf (i.e. not modified by me; the file date is 2017-10-13) and it already contains the following:
Code:
        # Do not scan ZFS zvols (to avoid problems on ZFS zvols snapshots)
        global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" ]
 
