ZFS: dataset busy

lecbm

New Member
Apr 26, 2023
Hello,

I've seen a lot of threads about the error "zfs error: cannot XYZ ... : dataset is busy", and it also happens quite often on our site (roughly every 3-4 weeks on a cluster with 6 nodes). As suggested in the other threads, a reboot solves the problem, but that is not acceptable for a production server.

The problem is that one can no longer interact with the affected containers. They are still running and working, but rollback, destroy and start operations on them fail.

For example:
Code:
TASK ERROR: zfs error: cannot destroy 'rpool/data/subvol-65002-disk-0': dataset is busy

This is the output of ps auxf | grep 65002 (65002 being the container that I tried to roll back):

Code:
ps auxf | grep 65002
root     2628155  0.0  0.0   6240   716 pts/0    S+   13:27   0:00                      \_ grep 65002
root     2225583  0.0  0.0   8832  4024 ?        D    Sep26   0:00 zfs unmount rpool/data/subvol-65002-disk-0
root     2799512  0.0  0.0   8832  4116 ?        D    Sep26   0:00 zfs mount rpool/data/subvol-65002-disk-0
root     2833006  0.0  0.0   8832  4088 ?        D    Sep26   0:00 zfs mount rpool/data/subvol-65002-disk-0
root     2838196  0.0  0.0   8832  4144 ?        D    Sep26   0:00 zfs mount rpool/data/subvol-65002-disk-0
root     2852238  0.0  0.0   8832  4116 ?        D    Sep26   0:00 zfs mount rpool/data/subvol-65002-disk-0
root     2647640  0.0  0.0   8832  4132 ?        D    Sep28   0:00 zfs mount rpool/data/subvol-65002-disk-0
root      524203  0.0  0.0   8832  4180 ?        D    09:16   0:00 zfs mount rpool/data/subvol-65002-disk-0

Sadly, the tasks are stuck in uninterruptible sleep (D state) and I cannot kill them.
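
In case it helps with diagnosis, these are the checks I could run against the stuck tasks and the dataset (only a sketch: the PID is one of the hung zfs tasks from the output above, and fuser on the mountpoint is just my guess at what to look at):

Code:
# Show what the hung tasks are blocked on in the kernel (D = uninterruptible sleep)
ps -o pid,stat,wchan:32,cmd -p 2225583
cat /proc/2225583/stack

# Check whether any process still holds the subvolume's mountpoint open
fuser -vm /rpool/data/subvol-65002-disk-0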

Do you have any suggestions on how I can solve this problem without having to reboot?
Does anybody know where this problem comes from? I read that it may be due to heavy IO load on the ZFS pools; can anyone confirm this or explain what I can do about it?
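
Regarding the IO-load theory, this is roughly what I could watch while the problem builds up (again only a sketch using standard tools, not something I have captured yet):

Code:
# Per-pool / per-vdev IO statistics, refreshed every 5 seconds
zpool iostat -v 5

# Per-device utilisation and latency
iostat -x 5

# Kernel warnings about tasks blocked for too long
dmesg | grep -i 'blocked for more than'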

Thanks for your help!

Additional information:
Code:
~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-6
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
Here is zpool status:

Code:
zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 02:49:19 with 0 errors on Sun Sep 10 03:13:20 2023
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors

  pool: ssd
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:10:56 with 0 errors on Sun Sep 10 00:35:02 2023
config:

        NAME                                               STATE     READ WRITE CKSUM
        ssd                                                ONLINE       0     0     0
          mirror-0                                         ONLINE       0     0     0
            ata-SAMSUNG_MZ7LN1T0HAJQ-00000_S3TXNX0MXXXXX  ONLINE       0     0     0
            ata-SAMSUNG_MZ7LN1T0HAJQ-00000_S3TXNX0NXXXXZ  ONLINE       0     0     0

errors: No known data errors

Here is the output of zfs list
Code:
zfs list
NAME                                    USED  AVAIL     REFER  MOUNTPOINT
rpool                                   624G  2.90T      104K  /rpool
rpool/ROOT                              458G  2.90T       96K  /rpool/ROOT
rpool/ROOT/pve-1                        458G  2.90T      458G  /
rpool/data                              166G  2.90T      160K  /rpool/data
rpool/data/base-10252-disk-0           17.7G  2.90T     17.7G  -
rpool/data/base-10252-disk-1             76K  2.90T       68K  -
rpool/data/subvol-10152-disk-0         65.0G  37.0G     43.0G  /rpool/data/subvol-10152-disk-0
rpool/data/subvol-10153-disk-0          398M  15.6G      398M  /rpool/data/subvol-10153-disk-0
rpool/data/subvol-130122-disk-0        8.28G  59.1G     4.93G  /rpool/data/subvol-130122-disk-0
rpool/data/subvol-64002-disk-0         2.31G  13.9G     2.14G  /rpool/data/subvol-64002-disk-0
rpool/data/subvol-64003-disk-0         1.05G  15.1G      894M  /rpool/data/subvol-64003-disk-0
rpool/data/subvol-64004-disk-0         1.94G  14.2G     1.76G  /rpool/data/subvol-64004-disk-0
rpool/data/subvol-64005-disk-0         7.15G  9.01G     6.99G  /rpool/data/subvol-64005-disk-0
rpool/data/subvol-64007-disk-0         4.62G  13.6G     4.40G  /rpool/data/subvol-64007-disk-0
rpool/data/subvol-64011-disk-0         1.63G  30.6G     1.45G  /rpool/data/subvol-64011-disk-0
rpool/data/subvol-65002-disk-0          456M  7.55G      456M  /rpool/data/subvol-65002-disk-0
rpool/data/subvol-65003-disk-0         1.40G  6.79G     1.21G  /rpool/data/subvol-65003-disk-0
rpool/data/subvol-65004-disk-0         1.61G  6.58G     1.42G  /rpool/data/subvol-65004-disk-0
rpool/data/subvol-65005-disk-0         6.75G  9.43G     6.57G  /rpool/data/subvol-65005-disk-0
rpool/data/subvol-65006-disk-0          924M  7.28G      733M  /rpool/data/subvol-65006-disk-0
rpool/data/subvol-65009-disk-0         1.10G  7.08G      940M  /rpool/data/subvol-65009-disk-0
rpool/data/subvol-65011-disk-0         1.83G  6.35G     1.65G  /rpool/data/subvol-65011-disk-0
rpool/data/subvol-65012-disk-0         4.53G  3.88G     4.12G  /rpool/data/subvol-65012-disk-0
rpool/data/subvol-65013-disk-0          990M  7.32G      692M  /rpool/data/subvol-65013-disk-0
rpool/data/subvol-8121-disk-0          10.6G  21.4G     10.6G  /rpool/data/subvol-8121-disk-0
rpool/data/vm-10213-disk-0             22.7G  2.90T     22.7G  -
rpool/data/vm-64001-disk-0             3.07G  2.90T     3.07G  -
ssd                                     190G   732G      152K  /ssd
ssd/subvol-101-disk-0                   398M  15.6G      398M  /ssd/subvol-101-disk-0
ssd/subvol-101-disk-1                   397M  7.61G      397M  /ssd/subvol-101-disk-1
ssd/subvol-102-disk-0                   397M  15.6G      397M  /ssd/subvol-102-disk-0
ssd/subvol-103-disk-0                   398M  15.6G      398M  /ssd/subvol-103-disk-0
ssd/subvol-104-disk-0                   435M  31.6G      435M  /ssd/subvol-104-disk-0
ssd/subvol-105-disk-0                   437M  15.6G      435M  /ssd/subvol-105-disk-0
ssd/subvol-106-disk-0                   437M  15.6G      435M  /ssd/subvol-106-disk-0
ssd/subvol-107-disk-0                   435M  15.6G      435M  /ssd/subvol-107-disk-0
ssd/subvol-200-disk-0                   397M  7.61G      397M  /ssd/subvol-200-disk-0
ssd/subvol-64002-disk-0                3.07G  12.9G     3.07G  /ssd/subvol-64002-disk-0
ssd/subvol-64003-disk-0                1.69G  14.4G     1.59G  /ssd/subvol-64003-disk-0
ssd/subvol-64004-disk-0                3.39G  12.6G     3.39G  /ssd/subvol-64004-disk-0
ssd/subvol-64005-disk-0                27.6G  8.56G     23.4G  /ssd/subvol-64005-disk-0
ssd/subvol-64006-disk-0                5.14G  12.8G     3.17G  /ssd/subvol-64006-disk-0
ssd/subvol-64007-disk-0                63.6G  3.38G     60.6G  /ssd/subvol-64007-disk-0
ssd/subvol-64008-disk-0                17.1G  14.9G     17.1G  /ssd/subvol-64008-disk-0
ssd/subvol-64009-disk-0                2.63G  5.37G     2.63G  /ssd/subvol-64009-disk-0
ssd/subvol-64010-disk-0                33.9G  32.6G     15.4G  /ssd/subvol-64010-disk-0
ssd/subvol-64011-disk-0                7.44G  24.6G     7.44G  /ssd/subvol-64011-disk-0
ssd/subvol-64012-disk-0                4.58G  35.5G     4.53G  /ssd/subvol-64012-disk-0
ssd/subvol-64013-disk-0                5.36G  26.7G     5.31G  /ssd/subvol-64013-disk-0
ssd/subvol-64014-disk-0                1.59G  30.4G     1.59G  /ssd/subvol-64014-disk-0
ssd/vm-64001-disk-0                    8.21G   732G     4.87G  -
ssd/vm-64001-state-BEFORE_UPGRADE_231  1.16G   732G     1.16G  -
 
Okay, looks good so far. Can I also see your

lsblk

and

journalctl --since '2023-10-03' > $(hostname)-journal.txt?
 
lsblk's output:
Code:
sudo lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    1   3.6T  0 disk
├─sda1     8:1    1  1007K  0 part
├─sda2     8:2    1   512M  0 part
└─sda3     8:3    1   3.6T  0 part
sdb        8:16   1   3.6T  0 disk
├─sdb1     8:17   1  1007K  0 part
├─sdb2     8:18   1   512M  0 part
└─sdb3     8:19   1   3.6T  0 part
sdc        8:32   1 953.9G  0 disk
├─sdc1     8:33   1 953.9G  0 part
└─sdc9     8:41   1     8M  0 part
sdd        8:48   1 953.9G  0 disk
├─sdd1     8:49   1 953.9G  0 part
└─sdd9     8:57   1     8M  0 part
zd0      230:0    0    32G  0 disk
├─zd0p1  230:1    0   512M  0 part
├─zd0p2  230:2    0     1K  0 part
└─zd0p5  230:5    0  31.5G  0 part
zd16     230:16   0    32G  0 disk
├─zd16p1 230:17   0    50M  0 part
├─zd16p2 230:18   0  31.5G  0 part
└─zd16p3 230:19   0   505M  0 part
zd32     230:32   0    32G  0 disk
├─zd32p1 230:33   0   260M  0 part
├─zd32p2 230:34   0   512K  0 part
├─zd32p3 230:35   0  23.7G  0 part
└─zd32p4 230:36   0     8G  0 part
zd48     230:48   0     4M  0 disk
zd64     230:64   0   4.5G  0 disk
zd80     230:80   0    32G  0 disk
├─zd80p1 230:81   0   260M  0 part
├─zd80p2 230:82   0   512K  0 part
├─zd80p3 230:83   0  23.7G  0 part
└─zd80p4 230:84   0     8G  0 part

The journalctl command produces more than 250 MB of output. How can I transmit it to you securely?
 
Yes, sorry, I used the wrong date before. Does journalctl --since '2023-10-03' > $(hostname)-journal.txt produce a shorter output?
 
It's still a very long output of more than 70,000 lines (when using journalctl --since '2023-10-03' --until '2023-10-10' > $(hostname)-journal.txt). Also, I don't feel comfortable pasting the output in a public forum; it contains some information that I don't want published.
 
No ideas? The problem is becoming more frequent as we add new servers to the cluster.
 
The problem still occurs in Proxmox Virtual Environment 8.1.4. I cannot reproduce it reliably, but most of the time it happens when I use a script to reset multiple containers one after another, something like this:


Bash:
#!/bin/bash
for i in $(seq 100 200); do
    sudo pct stop "$i"; sudo pct delsnapshot "$i" MY_LATEST_SNAPSHOT; sudo pct rollback "$i" FRESH_INSTALL; sudo pct start "$i"; sleep 5; echo "done"
done

I've already included the `sleep 5` to take some load off the server. Do you think this could be the problem?
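
If the serial stop/rollback/start turns out to be the trigger, I could try a more defensive variant of the loop. This is only a sketch under my own assumptions (the wait loop, the timeout and the hard-coded rpool/data mountpoint path are mine; containers on other pools would need a different path):

Bash:
#!/bin/bash
for i in $(seq 100 200); do
    sudo pct stop "$i"

    # Wait (up to ~60s) until the container is really stopped and its subvolume
    # is no longer mounted, so the rollback does not race a pending zfs unmount.
    # NOTE: the mountpoint path is an assumption; adjust for the pool in use.
    for _ in $(seq 1 30); do
        if sudo pct status "$i" | grep -q 'stopped' \
           && ! mountpoint -q "/rpool/data/subvol-${i}-disk-0"; then
            break
        fi
        sleep 2
    done

    sudo pct delsnapshot "$i" MY_LATEST_SNAPSHOT
    sudo pct rollback "$i" FRESH_INSTALL
    sudo pct start "$i"
    sleep 5
    echo "done with $i"
done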
 
Hello

Sorry, it seems I forgot to answer you.

In my experience, when the log is extremely long (as in your case), it is because some tasks are very verbose. I usually look for output that is repeated very frequently and then filter it out by piping through grep -v.

Something like
Code:
journalctl --since ... | grep -v "something" > file
should do the trick
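
For example, something along these lines (the patterns are placeholders for whatever turns out to be the noisy output on your node); compressing the result also makes it easier to share:

Code:
journalctl --since '2023-10-03' --until '2023-10-10' \
    | grep -vE 'noisy-pattern-1|noisy-pattern-2' \
    | gzip > "$(hostname)-journal.txt.gz"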
 
