[SOLVED] zfs import error

SnejPro
New Member — Feb 28, 2022 — Olching, Germany
Hi,

My system suddenly stopped working and I cannot import either of my ZFS pools (the rpool that Proxmox runs on, and a data pool where I store my data - no VMs).

Even when I plug the SSDs into another system and run zpool import -f rpool from an Ubuntu live stick, I get this error in the syslog:

Code:
Feb 28 02:18:01 ubuntu kernel: [  144.573471] VERIFY3(rs_get_start(rs, rt) <= start) failed (420394233856 <= 420394225664)
Feb 28 02:18:01 ubuntu kernel: [  144.573476] PANIC at range_tree.c:483:range_tree_remove_impl()
Feb 28 02:18:01 ubuntu kernel: [  144.573479] Showing stack for process 4586
Feb 28 02:18:01 ubuntu kernel: [  144.573481] CPU: 7 PID: 4586 Comm: z_wr_iss Tainted: P           O      5.11.0-27-generic #29~20.04.1-Ubuntu
Feb 28 02:18:01 ubuntu kernel: [  144.573484] Hardware name: Micro-Star International Co., Ltd. MS-7C37/MPG X570 GAMING PLUS (MS-7C37), BIOS A.F0 12/16/2021
Feb 28 02:18:01 ubuntu kernel: [  144.573486] Call Trace:
Feb 28 02:18:01 ubuntu kernel: [  144.573490]  dump_stack+0x74/0x92
Feb 28 02:18:01 ubuntu kernel: [  144.573499]  spl_dumpstack+0x29/0x2b [spl]
Feb 28 02:18:01 ubuntu kernel: [  144.573509]  spl_panic+0xd4/0xfc [spl]
Feb 28 02:18:01 ubuntu kernel: [  144.573518]  ? zfs_btree_insert_into_leaf+0x1c6/0x230 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.573603]  ? bmov+0x17/0x20 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.573684]  ? zfs_btree_remove_from_node+0xf1/0x4e0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.573765]  ? zfs_btree_find_parent_idx+0x81/0xd0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.573845]  ? zfs_btree_add_idx+0xde/0x230 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.573926]  ? zfs_btree_next_helper+0x76/0x190 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574007]  range_tree_remove_impl+0x815/0xf90 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574113]  range_tree_remove+0x10/0x20 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574206]  space_map_load_callback+0x27/0x90 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574315]  space_map_iterate+0x1e1/0x3e0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574414]  ? __schedule+0x454/0x8a0
Feb 28 02:18:01 ubuntu kernel: [  144.574418]  ? spa_stats_destroy+0x1c0/0x1c0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574513]  space_map_load_length+0x61/0xe0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574606]  metaslab_load+0x160/0x810 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574705]  ? range_tree_add_impl+0x7ff/0xfd0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574800]  metaslab_activate+0x4c/0x240 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574890]  ? metaslab_set_selected_txg+0x90/0xc0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.574979]  metaslab_alloc_dva+0x158/0x1110 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.575067]  metaslab_alloc+0xb2/0x240 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.575155]  zio_dva_allocate+0xe6/0x850 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.575266]  ? _cond_resched+0x19/0x30
Feb 28 02:18:01 ubuntu kernel: [  144.575269]  ? mutex_lock+0x13/0x40
Feb 28 02:18:01 ubuntu kernel: [  144.575272]  ? tsd_hash_search.isra.0+0x47/0xa0 [spl]
Feb 28 02:18:01 ubuntu kernel: [  144.575282]  zio_execute+0x93/0xf0 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.575386]  taskq_thread+0x2f6/0x500 [spl]
Feb 28 02:18:01 ubuntu kernel: [  144.575396]  ? wake_up_q+0xa0/0xa0
Feb 28 02:18:01 ubuntu kernel: [  144.575401]  ? zio_taskq_member.isra.0.constprop.0+0x60/0x60 [zfs]
Feb 28 02:18:01 ubuntu kernel: [  144.575499]  kthread+0x114/0x150
Feb 28 02:18:01 ubuntu kernel: [  144.575502]  ? task_done+0xb0/0xb0 [spl]
Feb 28 02:18:01 ubuntu kernel: [  144.575511]  ? kthread_park+0x90/0x90
Feb 28 02:18:01 ubuntu kernel: [  144.575514]  ret_from_fork+0x22/0x30

Importing it read-only works, however.

So I have two questions:
- Is it possible to repair the pools? zpool import -F does not change the output of a subsequent zpool import -f.
- If the pools cannot be repaired, how can I copy the VMs from the rpool to a fresh Proxmox installation?

Greetings

Jens
 
zpool status reports everything ONLINE with no data errors; I can post the output in a few minutes.

The last scrub ran on 13 Feb 2022, also with no errors.
 
But you can't be sure that your pool isn't damaged unless you run a scrub again after problems like these.
 
Code:
root@ubuntu:/home/ubuntu# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:    20.04
Codename:    focal
root@ubuntu:/home/ubuntu# zpool import
   pool: rpool
     id: 13700702169853694290
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
    the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-EY
 config:

    rpool                                              ONLINE
      mirror-0                                         ONLINE
        ata-WDC_WDS500G1R0A-68A4W0_21211Z800470-part3  ONLINE
        ata-WDC_WDS500G1R0A-68A4W0_21211Z800559-part3  ONLINE
root@ubuntu:/home/ubuntu# zpool import -f -R /a -o readonly=on rpool
root@ubuntu:/home/ubuntu# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:08:33 with 0 errors on Sun Feb 13 00:32:36 2022
config:

    NAME                                               STATE     READ WRITE CKSUM
    rpool                                              ONLINE       0     0     0
      mirror-0                                         ONLINE       0     0     0
        ata-WDC_WDS500G1R0A-68A4W0_21211Z800470-part3  ONLINE       0     0     0
        ata-WDC_WDS500G1R0A-68A4W0_21211Z800559-part3  ONLINE       0     0     0

errors: No known data errors
 
For future readers:
I imported the pool read-only, copied the ZFS volumes of the VMs, and reinstalled Proxmox. Afterwards I imported the volumes, and the VMs work as before.

I also moved my setup to a server with ECC RAM, because I suspect a memory error corrupted the pool.

Thanks for your help :)
 
