ZFS snapshots fail frequently after upgrade to 6.3-3

Dec 7, 2020
Today I upgraded 2 of 3 nodes in a cluster from

pve-manager/6.2-15/48bd51b6
Linux 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100)

to

pve-manager/6.3-3/eee5f901
Linux 5.4.78-2-pve #1 SMP PVE 5.4.78-2 (Thu, 03 Dec 2020 14:26:17 +0100)

and anything involving ZFS snapshots seems to have either broken or become horribly slow on the upgraded nodes.

Replication of a container gets me:

Code:
2020-12-07 21:45:01 100-0: start replication job
2020-12-07 21:45:01 100-0: guest => CT 100, running => 1
2020-12-07 21:45:01 100-0: volumes => local-zfs:subvol-100-disk-1
2020-12-07 21:45:02 100-0: freeze guest filesystem
2020-12-07 21:45:03 100-0: create snapshot '__replicate_100-0_1607377501__' on local-zfs:subvol-100-disk-1
2020-12-07 22:22:53 100-0: thaw guest filesystem
2020-12-07 22:22:53 100-0: end replication job with error: command 'zfs snapshot rpool/data/subvol-100-disk-1@__replicate_100-0_1607377501__' failed: got timeout
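
For comparison, timing the same operation by hand from the shell should show whether ZFS itself is genuinely this slow or merely slower than whatever timeout PVE applies. A minimal sketch (the snapshot name here is just a throwaway example):

Code:
# take a throwaway snapshot by hand and see how long ZFS actually takes
time zfs snapshot rpool/data/subvol-100-disk-1@manual_timing_test

# remove it again afterwards
zfs destroy rpool/data/subvol-100-disk-1@manual_timing_test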

If I explicitly take a snapshot, what used to be essentially instantaneous now hangs for a long time with the status stuck at "prepare". (I'll add a note here if it ever gets as far as producing output.) [It looks like a coordination issue between PVE and the underlying ZFS state: in the GUI the snapshot task never finishes and still shows "prepare", but if I go to the CLI, the snapshot on disk looks perfectly normal.] A snapshot on the node still running 6.2 works fine.
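
By "look at the snapshot on disk from the CLI" I mean roughly the following (dataset name taken from the replication log above; the snapshot name will of course differ for a manual snapshot):

Code:
# list all snapshots of the container's dataset, with creation times
zfs list -r -t snapshot -o name,used,refer,creation rpool/data/subvol-100-disk-1

# inspect one specific snapshot
zfs get creation,used,referenced rpool/data/subvol-100-disk-1@__replicate_100-0_1607377501__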

Migration is pretty iffy too. Rebooting one of the upgraded nodes for the second time since the upgrade was very slow because it had trouble unmounting the filesystem of one of the containers, and so on. I also notice that overall system load has risen and CPU idle % has dropped on the upgraded nodes.
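
By "load" I mean more than the GUI graphs; the kind of numbers I'm referring to can be watched with something like the following (iostat assumes the sysstat package is installed):

Code:
# per-vdev pool I/O, refreshed every 5 seconds
zpool iostat -v rpool 5

# CPU utilisation and iowait per device
iostat -x 5

# any hung-task warnings the kernel has already logged
dmesg | grep -i "blocked for more than"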

Where do I look next?
 
It seems to be load related. While it's true that the nodes in question are not cutting edge (Intel(R) Xeon(R) CPU E5645 @ 2.40GHz with ZFS mirrors of 7200 rpm SATA II drives), I'm still a bit surprised. Let's call the nodes E, M, and V. If E and V both happen to try to send replication snapshots to M at the same time that M is trying to create and send a snapshot to V, you get the following:

E -> M creeps along


Code:
2020-12-08 16:40:00 102-0: start replication job
2020-12-08 16:40:00 102-0: guest => VM 102, running => 27365
2020-12-08 16:40:00 102-0: volumes => local-zfs:vm-102-disk-0
2020-12-08 16:40:02 102-0: freeze guest filesystem
2020-12-08 16:40:02 102-0: create snapshot '__replicate_102-0_1607445600__' on local-zfs:vm-102-disk-0
2020-12-08 16:40:02 102-0: thaw guest filesystem
2020-12-08 16:40:02 102-0: using secure transmission, rate limit: none
2020-12-08 16:40:02 102-0: full sync 'local-zfs:vm-102-disk-0' (__replicate_102-0_1607445600__)
2020-12-08 16:40:04 102-0: full send of rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__ estimated size is 39.2G
2020-12-08 16:40:04 102-0: total estimated size is 39.2G
2020-12-08 16:40:04 102-0: TIME SENT SNAPSHOT rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:05 102-0: 16:40:05 2.89M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:06 102-0: 16:40:06 2.89M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:07 102-0: 16:40:07 49.5M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:08 102-0: 16:40:08 121M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:09 102-0: 16:40:09 151M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:10 102-0: 16:40:10 188M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:11 102-0: 16:40:11 219M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:12 102-0: 16:40:12 255M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:13 102-0: 16:40:13 295M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:14 102-0: 16:40:14 321M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:15 102-0: 16:40:15 353M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:40:16 102-0: 16:40:16 378M rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
<snip>
2020-12-08 16:48:07 102-0: 16:48:07 5.61G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:48:08 102-0: 16:48:08 5.61G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:48:09 102-0: 16:48:09 5.61G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:48:10 102-0: 16:48:10 5.62G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:48:11 102-0: 16:48:11 5.62G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:48:12 102-0: 16:48:12 5.62G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 16:48:13 102-0: 16:48:13 5.62G rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
<snip>
2020-12-08 17:35:50 102-0: 17:35:50   11.1G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:51 102-0: 17:35:51   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:52 102-0: 17:35:52   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:53 102-0: 17:35:53   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:54 102-0: 17:35:54   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:55 102-0: 17:35:55   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:56 102-0: 17:35:56   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:57 102-0: 17:35:57   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:58 102-0: 17:35:58   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:35:59 102-0: 17:35:59   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:00 102-0: 17:36:00   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:01 102-0: 17:36:01   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:02 102-0: 17:36:02   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:03 102-0: 17:36:03   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:04 102-0: 17:36:04   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:05 102-0: 17:36:05   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:06 102-0: 17:36:06   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:07 102-0: 17:36:07   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:08 102-0: 17:36:08   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:09 102-0: 17:36:09   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:10 102-0: 17:36:10   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:11 102-0: 17:36:11   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:12 102-0: 17:36:12   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:13 102-0: 17:36:13   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:14 102-0: 17:36:14   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__
2020-12-08 17:36:15 102-0: 17:36:15   11.2G   rpool/data/vm-102-disk-0@__replicate_102-0_1607445600__


V -> M stalls

Code:
2020-12-08 16:45:01 115-1: start replication job
2020-12-08 16:45:01 115-1: guest => CT 115, running => 1
2020-12-08 16:45:01 115-1: volumes => local-zfs:subvol-115-disk-0
2020-12-08 16:45:02 115-1: freeze guest filesystem
2020-12-08 16:45:02 115-1: create snapshot '__replicate_115-1_1607445901__' on local-zfs:subvol-115-disk-0
2020-12-08 16:45:05 115-1: thaw guest filesystem
2020-12-08 16:45:05 115-1: using secure transmission, rate limit: none
2020-12-08 16:45:05 115-1: incremental sync 'local-zfs:subvol-115-disk-0' (__replicate_115-1_1607445001__ => __replicate_115-1_1607445901__)
and that is as far as the log had gotten when I copied it at 17:30

M, per the GUI, fails to create its snapshot, and it's ugly: the guest filesystem stays frozen for 40 minutes:

Code:
2020-12-08 16:45:01 100-1: start replication job
2020-12-08 16:45:01 100-1: guest => CT 100, running => 1
2020-12-08 16:45:01 100-1: volumes => local-zfs:subvol-100-disk-1
2020-12-08 16:45:02 100-1: freeze guest filesystem
2020-12-08 16:45:03 100-1: create snapshot '__replicate_100-1_1607445901__' on local-zfs:subvol-100-disk-1
2020-12-08 17:25:36 100-1: thaw guest filesystem
2020-12-08 17:25:36 100-1: end replication job with error: command 'zfs snapshot rpool/data/subvol-100-disk-1@__replicate_100-1_1607445901__' failed: got timeout

However, if you ask ZFS directly from the CLI, the snapshot does exist:

Code:
rpool/data/subvol-100-disk-1                                  497M  7.52G      496M  /rpool/data/subvol-100-disk-1
rpool/data/subvol-100-disk-1@test_2                             0B      -      496M  -
rpool/data/subvol-100-disk-1@test_from_6_2                      0B      -      496M  -
rpool/data/subvol-100-disk-1@__replicate_100-1_1607445001__   208K      -      496M  -
rpool/data/subvol-100-disk-1@__replicate_100-1_1607445901__   192K      -      496M  -

I don't recall issues like this on the old version of PVE, but will freely admit that I wasn't paying a lot of attention to this aspect.
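
For what it's worth, the scheduling overlap itself is easy to confirm from the CLI on each node; pvesr is the storage replication CLI:

Code:
# list the configured replication jobs and their schedules on this node
pvesr list

# show last sync time, duration and state for each job
pvesr status

If the overlap really is the trigger, staggering the schedules (Datacenter -> Replication in the GUI) would be the obvious thing to try, though with a full sync taking the better part of an hour that can only help so much.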
 
From the kernel log (syslog) on M:

Code:
Dec  8 16:44:43 moldycrow kernel: [61506.574744] INFO: task txg_sync:15752 blocked for more than 120 seconds.
Dec  8 16:44:43 moldycrow kernel: [61506.574780]       Tainted: P           O      5.4.78-2-pve #1
Dec  8 16:44:43 moldycrow kernel: [61506.574800] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  8 16:44:43 moldycrow kernel: [61506.574827] txg_sync        D    0 15752      2 0x80004000
Dec  8 16:44:43 moldycrow kernel: [61506.574829] Call Trace:
Dec  8 16:44:43 moldycrow kernel: [61506.574840]  __schedule+0x2e6/0x6f0
Dec  8 16:44:43 moldycrow kernel: [61506.574842]  schedule+0x33/0xa0
Dec  8 16:44:43 moldycrow kernel: [61506.574846]  schedule_timeout+0x152/0x330
Dec  8 16:44:43 moldycrow kernel: [61506.574851]  ? __next_timer_interrupt+0xd0/0xd0
Dec  8 16:44:43 moldycrow kernel: [61506.574853]  io_schedule_timeout+0x1e/0x50
Dec  8 16:44:43 moldycrow kernel: [61506.574862]  __cv_timedwait_common+0x12f/0x170 [spl]
Dec  8 16:44:43 moldycrow kernel: [61506.574864]  ? wait_woken+0x80/0x80
Dec  8 16:44:43 moldycrow kernel: [61506.574869]  __cv_timedwait_io+0x19/0x20 [spl]
Dec  8 16:44:43 moldycrow kernel: [61506.574942]  zio_wait+0x130/0x270 [zfs]
Dec  8 16:44:43 moldycrow kernel: [61506.574943]  ? _cond_resched+0x19/0x30
Dec  8 16:44:43 moldycrow kernel: [61506.574989]  dsl_pool_sync+0xdc/0x500 [zfs]
Dec  8 16:44:43 moldycrow kernel: [61506.575039]  spa_sync+0x5a7/0xfa0 [zfs]
Dec  8 16:44:43 moldycrow kernel: [61506.575040]  ? _cond_resched+0x19/0x30
Dec  8 16:44:43 moldycrow kernel: [61506.575091]  ? spa_txg_history_init_io+0x104/0x110 [zfs]
Dec  8 16:44:43 moldycrow kernel: [61506.575141]  txg_sync_thread+0x2d6/0x480 [zfs]
Dec  8 16:44:43 moldycrow kernel: [61506.575192]  ? txg_thread_exit.isra.13+0x60/0x60 [zfs]
Dec  8 16:44:43 moldycrow kernel: [61506.575198]  thread_generic_wrapper+0x74/0x90 [spl]
Dec  8 16:44:43 moldycrow kernel: [61506.575202]  kthread+0x120/0x140
Dec  8 16:44:43 moldycrow kernel: [61506.575207]  ? __thread_exit+0x20/0x20 [spl]
Dec  8 16:44:43 moldycrow kernel: [61506.575209]  ? kthread_park+0x90/0x90
Dec  8 16:44:43 moldycrow kernel: [61506.575211]  ret_from_fork+0x35/0x40
Dec  8 16:48:45 moldycrow kernel: [61748.242918] INFO: task zfs:467 blocked for more than 120 seconds.
Dec  8 16:48:45 moldycrow kernel: [61748.242950]       Tainted: P           O      5.4.78-2-pve #1
Dec  8 16:48:45 moldycrow kernel: [61748.242970] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  8 16:48:45 moldycrow kernel: [61748.242997] zfs             D    0   467  32722 0x00004004
Dec  8 16:48:45 moldycrow kernel: [61748.242999] Call Trace:
Dec  8 16:48:45 moldycrow kernel: [61748.243010]  __schedule+0x2e6/0x6f0
Dec  8 16:48:45 moldycrow kernel: [61748.243011]  schedule+0x33/0xa0
Dec  8 16:48:45 moldycrow kernel: [61748.243013]  io_schedule+0x16/0x40
Dec  8 16:48:45 moldycrow kernel: [61748.243023]  cv_wait_common+0xb5/0x130 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243026]  ? wait_woken+0x80/0x80
Dec  8 16:48:45 moldycrow kernel: [61748.243030]  __cv_wait_io+0x18/0x20 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243106]  txg_wait_synced_impl+0xc9/0x110 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243156]  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243203]  dsl_sync_task_common+0x1b5/0x290 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243245]  ? dsl_dataset_hold+0x20/0x20 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243288]  ? dsl_dataset_snapshot_sync_impl+0x800/0x800 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243331]  ? dsl_dataset_hold+0x20/0x20 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243374]  ? dsl_dataset_snapshot_sync_impl+0x800/0x800 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243420]  dsl_sync_task+0x1a/0x20 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243462]  dsl_dataset_snapshot+0x131/0x360 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243468]  ? spl_vmem_alloc+0x19/0x20 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243477]  ? nvt_add_nvpair+0xc6/0x110 [znvpair]
Dec  8 16:48:45 moldycrow kernel: [61748.243479]  ? _cond_resched+0x19/0x30
Dec  8 16:48:45 moldycrow kernel: [61748.243481]  ? __kmalloc_node+0x1e0/0x330
Dec  8 16:48:45 moldycrow kernel: [61748.243483]  ? _cond_resched+0x19/0x30
Dec  8 16:48:45 moldycrow kernel: [61748.243487]  ? spl_kmem_alloc_impl+0xe8/0x130 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243492]  ? nvt_lookup_name_type.isra.55+0x77/0xb0 [znvpair]
Dec  8 16:48:45 moldycrow kernel: [61748.243496]  ? nvlist_lookup_common+0x63/0x80 [znvpair]
Dec  8 16:48:45 moldycrow kernel: [61748.243547]  zfs_ioc_snapshot+0x270/0x360 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243599]  zfsdev_ioctl+0x1e0/0x8f0 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243604]  do_vfs_ioctl+0xa9/0x640
Dec  8 16:48:45 moldycrow kernel: [61748.243608]  ? handle_mm_fault+0xc9/0x1f0
Dec  8 16:48:45 moldycrow kernel: [61748.243609]  ksys_ioctl+0x67/0x90
Dec  8 16:48:45 moldycrow kernel: [61748.243611]  __x64_sys_ioctl+0x1a/0x20
Dec  8 16:48:45 moldycrow kernel: [61748.243616]  do_syscall_64+0x57/0x190
Dec  8 16:48:45 moldycrow kernel: [61748.243619]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec  8 16:48:45 moldycrow kernel: [61748.243622] RIP: 0033:0x7f1fb6fe8427
Dec  8 16:48:45 moldycrow kernel: [61748.243626] Code: Bad RIP value.
Dec  8 16:48:45 moldycrow kernel: [61748.243627] RSP: 002b:00007ffddeb83538 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 16:48:45 moldycrow kernel: [61748.243629] RAX: ffffffffffffffda RBX: 00007ffddeb83560 RCX: 00007f1fb6fe8427
Dec  8 16:48:45 moldycrow kernel: [61748.243630] RDX: 00007ffddeb83560 RSI: 0000000000005a23 RDI: 0000000000000006
Dec  8 16:48:45 moldycrow kernel: [61748.243631] RBP: 00007ffddeb86b40 R08: 0000000000000000 R09: 00007f1fb70b3d60
Dec  8 16:48:45 moldycrow kernel: [61748.243632] R10: fffffffffffff000 R11: 0000000000000246 R12: 00007ffddeb86cb8
Dec  8 16:48:45 moldycrow kernel: [61748.243633] R13: 0000000000005a23 R14: 0000000000000006 R15: 0000000000005a23
Dec  8 16:48:45 moldycrow kernel: [61748.243636] INFO: task zfs:868 blocked for more than 120 seconds.
Dec  8 16:48:45 moldycrow kernel: [61748.243658]       Tainted: P           O      5.4.78-2-pve #1
Dec  8 16:48:45 moldycrow kernel: [61748.243678] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  8 16:48:45 moldycrow kernel: [61748.243704] zfs             D    0   868    803 0x00004000
Dec  8 16:48:45 moldycrow kernel: [61748.243705] Call Trace:
Dec  8 16:48:45 moldycrow kernel: [61748.243708]  __schedule+0x2e6/0x6f0
Dec  8 16:48:45 moldycrow kernel: [61748.243713]  ? spl_kmem_alloc+0xec/0x140 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243715]  schedule+0x33/0xa0
Dec  8 16:48:45 moldycrow kernel: [61748.243716]  io_schedule+0x16/0x40
Dec  8 16:48:45 moldycrow kernel: [61748.243720]  cv_wait_common+0xb5/0x130 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243721]  ? wait_woken+0x80/0x80
Dec  8 16:48:45 moldycrow kernel: [61748.243725]  __cv_wait_io+0x18/0x20 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.243776]  txg_wait_synced_impl+0xc9/0x110 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243827]  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243873]  dsl_sync_task_common+0x1b5/0x290 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243913]  ? dmu_recv_begin_sync+0x880/0x880 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243953]  ? receive_cksum+0x30/0x30 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.243993]  ? dmu_recv_begin_sync+0x880/0x880 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244033]  ? receive_cksum+0x30/0x30 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244080]  dsl_sync_task+0x1a/0x20 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244120]  dmu_recv_begin+0x173/0x260 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244172]  zfs_ioc_recv_impl+0xe3/0x10e0 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244177]  ? nvt_add_nvpair+0xc6/0x110 [znvpair]
Dec  8 16:48:45 moldycrow kernel: [61748.244181]  ? nvs_native_nvp_op+0x1f0/0x1f0 [znvpair]
Dec  8 16:48:45 moldycrow kernel: [61748.244185]  ? nvs_operation+0x175/0x310 [znvpair]
Dec  8 16:48:45 moldycrow kernel: [61748.244238]  zfs_ioc_recv+0x19a/0x340 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244245]  ? ns_capable_common+0x2f/0x50
Dec  8 16:48:45 moldycrow kernel: [61748.244246]  ? capable+0x19/0x20
Dec  8 16:48:45 moldycrow kernel: [61748.244294]  ? priv_policy.isra.3.part.4+0x11/0x20 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244342]  ? secpolicy_zinject+0x3a/0x40 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244343]  ? _cond_resched+0x19/0x30
Dec  8 16:48:45 moldycrow kernel: [61748.244345]  ? __kmalloc+0x197/0x280
Dec  8 16:48:45 moldycrow kernel: [61748.244349]  ? strdup+0x45/0x70 [spl]
Dec  8 16:48:45 moldycrow kernel: [61748.244401]  zfsdev_ioctl+0x6db/0x8f0 [zfs]
Dec  8 16:48:45 moldycrow kernel: [61748.244405]  ? lru_cache_add_active_or_unevictable+0x39/0xb0
Dec  8 16:48:45 moldycrow kernel: [61748.244408]  do_vfs_ioctl+0xa9/0x640
Dec  8 16:48:45 moldycrow kernel: [61748.244410]  ? handle_mm_fault+0xc9/0x1f0
Dec  8 16:48:45 moldycrow kernel: [61748.244411]  ksys_ioctl+0x67/0x90
Dec  8 16:48:45 moldycrow kernel: [61748.244413]  __x64_sys_ioctl+0x1a/0x20
Dec  8 16:48:45 moldycrow kernel: [61748.244415]  do_syscall_64+0x57/0x190
Dec  8 16:48:45 moldycrow kernel: [61748.244417]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec  8 16:48:45 moldycrow kernel: [61748.244418] RIP: 0033:0x7fb1bede3427
Dec  8 16:48:45 moldycrow kernel: [61748.244420] Code: Bad RIP value.
Dec  8 16:48:45 moldycrow kernel: [61748.244421] RSP: 002b:00007ffefb4bf528 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 16:48:45 moldycrow kernel: [61748.244423] RAX: ffffffffffffffda RBX: 00007ffefb4bf6d0 RCX: 00007fb1bede3427
Dec  8 16:48:45 moldycrow kernel: [61748.244423] RDX: 00007ffefb4bf6d0 RSI: 0000000000005a1b RDI: 0000000000000006
Dec  8 16:48:45 moldycrow kernel: [61748.244424] RBP: 00007ffefb4c3cc0 R08: 0000000000000003 R09: 00007fb1beeaeda0
Dec  8 16:48:45 moldycrow kernel: [61748.244425] R10: 0000557485a05010 R11: 0000000000000246 R12: 00007ffefb4c2c80
Dec  8 16:48:45 moldycrow kernel: [61748.244426] R13: 00007ffefb4ce2a8 R14: 0000557485a08ca0 R15: 00007ffefb4c9b40
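
The blocked task in the first trace is txg_sync, i.e. the ZFS transaction group sync itself is what's stalling, so the next thing to watch is how long each TXG takes and what the pool latency looks like while a replication job runs. A rough sketch, assuming the txgs kstat path of ZFS on Linux 0.8.x as shipped with PVE 6.x:

Code:
# per-TXG timing; the otime/qtime/wtime/stime columns show where each txg spends its time
tail -n 20 /proc/spl/kstat/zfs/rpool/txgs

# total request latency histograms for the pool
zpool iostat -w rpool

# per-device latency while a replication job is in flight
zpool iostat -v -l rpool 5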
 
