FAULTING_MODULE: fffff80133220000 vioscsi

CH.illig

I have a Windows Server VM that crashes every few weeks.
The VirtIO guest agent and drivers were installed with version 0.1.252, and now 0.1.262 (the version in this log).

In the memory dump I get the following error:
Code:
9: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

KERNEL_DATA_INPAGE_ERROR (7a)
The requested page of kernel data could not be read in.  Typically caused by
a bad block in the paging file or disk controller error. Also see
KERNEL_STACK_INPAGE_ERROR.
If the error status is 0xC000000E, 0xC000009C, 0xC000009D or 0xC0000185,
it means the disk subsystem has experienced a failure.
If the error status is 0xC000009A, then it means the request failed because
a filesystem failed to make forward progress.
Arguments:
Arg1: fffff3eb066f3340, lock type that was held (value 1,2,3, or PTE address)
Arg2: ffffffffc0000185, error status (normally i/o status code)
Arg3: 0000200001e91be0, current process (virtual address for lock type 3, or PTE)
Arg4: ffffd60cde668000, virtual address that could not be in-paged (or PTE contents if arg1 is a PTE address)

Debugging Details:
------------------

Unable to load image \SystemRoot\System32\drivers\vioscsi.sys, Win32 error 0n2

KEY_VALUES_STRING: 1

    Key  : Analysis.CPU.mSec
    Value: 1218

    Key  : Analysis.Elapsed.mSec
    Value: 2406

    Key  : Analysis.IO.Other.Mb
    Value: 5

    Key  : Analysis.IO.Read.Mb
    Value: 1

    Key  : Analysis.IO.Write.Mb
    Value: 29

    Key  : Analysis.Init.CPU.mSec
    Value: 109

    Key  : Analysis.Init.Elapsed.mSec
    Value: 19067

    Key  : Analysis.Memory.CommitPeak.Mb
    Value: 98

    Key  : Analysis.Version.DbgEng
    Value: 10.0.27725.1000

    Key  : Analysis.Version.Description
    Value: 10.2408.27.01 amd64fre

    Key  : Analysis.Version.Ext
    Value: 1.2408.27.1

    Key  : Bugcheck.Code.KiBugCheckData
    Value: 0x7a

    Key  : Bugcheck.Code.LegacyAPI
    Value: 0x7a

    Key  : Bugcheck.Code.TargetModel
    Value: 0x7a

    Key  : Failure.Bucket
    Value: 0x7a_c0000185_DUMP_VIOSCSI

    Key  : Failure.Hash
    Value: {f5096b16-2043-7702-792c-bfca7413f754}

    Key  : Hypervisor.Enlightenments.Value
    Value: 16752

    Key  : Hypervisor.Enlightenments.ValueHex
    Value: 4170

    Key  : Hypervisor.Flags.AnyHypervisorPresent
    Value: 1

    Key  : Hypervisor.Flags.ApicEnlightened
    Value: 1

    Key  : Hypervisor.Flags.ApicVirtualizationAvailable
    Value: 0

    Key  : Hypervisor.Flags.AsyncMemoryHint
    Value: 0

    Key  : Hypervisor.Flags.CoreSchedulerRequested
    Value: 0

    Key  : Hypervisor.Flags.CpuManager
    Value: 0

    Key  : Hypervisor.Flags.DeprecateAutoEoi
    Value: 0

    Key  : Hypervisor.Flags.DynamicCpuDisabled
    Value: 0

    Key  : Hypervisor.Flags.Epf
    Value: 0

    Key  : Hypervisor.Flags.ExtendedProcessorMasks
    Value: 1

    Key  : Hypervisor.Flags.HardwareMbecAvailable
    Value: 0

    Key  : Hypervisor.Flags.MaxBankNumber
    Value: 0

    Key  : Hypervisor.Flags.MemoryZeroingControl
    Value: 0

    Key  : Hypervisor.Flags.NoExtendedRangeFlush
    Value: 1

    Key  : Hypervisor.Flags.NoNonArchCoreSharing
    Value: 0

    Key  : Hypervisor.Flags.Phase0InitDone
    Value: 1

    Key  : Hypervisor.Flags.PowerSchedulerQos
    Value: 0

    Key  : Hypervisor.Flags.RootScheduler
    Value: 0

    Key  : Hypervisor.Flags.SynicAvailable
    Value: 1

    Key  : Hypervisor.Flags.UseQpcBias
    Value: 0

    Key  : Hypervisor.Flags.Value
    Value: 536745

    Key  : Hypervisor.Flags.ValueHex
    Value: 830a9

    Key  : Hypervisor.Flags.VpAssistPage
    Value: 1

    Key  : Hypervisor.Flags.VsmAvailable
    Value: 0

    Key  : Hypervisor.RootFlags.AccessStats
    Value: 0

    Key  : Hypervisor.RootFlags.CrashdumpEnlightened
    Value: 0

    Key  : Hypervisor.RootFlags.CreateVirtualProcessor
    Value: 0

    Key  : Hypervisor.RootFlags.DisableHyperthreading
    Value: 0

    Key  : Hypervisor.RootFlags.HostTimelineSync
    Value: 0

    Key  : Hypervisor.RootFlags.HypervisorDebuggingEnabled
    Value: 0

    Key  : Hypervisor.RootFlags.IsHyperV
    Value: 0

    Key  : Hypervisor.RootFlags.LivedumpEnlightened
    Value: 0

    Key  : Hypervisor.RootFlags.MapDeviceInterrupt
    Value: 0

    Key  : Hypervisor.RootFlags.MceEnlightened
    Value: 0

    Key  : Hypervisor.RootFlags.Nested
    Value: 0

    Key  : Hypervisor.RootFlags.StartLogicalProcessor
    Value: 0

    Key  : Hypervisor.RootFlags.Value
    Value: 0

    Key  : Hypervisor.RootFlags.ValueHex
    Value: 0

    Key  : SecureKernel.HalpHvciEnabled
    Value: 0

    Key  : WER.DumpDriver
    Value: DUMP_VIOSCSI

    Key  : WER.OS.Branch
    Value: fe_release_svc_prod2

    Key  : WER.OS.Version
    Value: 10.0.20348.859


BUGCHECK_CODE:  7a

BUGCHECK_P1: fffff3eb066f3340

BUGCHECK_P2: ffffffffc0000185

BUGCHECK_P3: 200001e91be0

BUGCHECK_P4: ffffd60cde668000

FILE_IN_CAB:  MEMORY.DMP

FAULTING_THREAD:  ffffdb08d0144040

ERROR_CODE: (NTSTATUS) 0xc0000185 - Das E/A-Gerät hat einen E/A-Fehler gemeldet. (The I/O device reported an I/O error.)

IMAGE_NAME:  vioscsi.sys

MODULE_NAME: vioscsi

FAULTING_MODULE: fffff80133220000 vioscsi

DISK_HARDWARE_ERROR: There was error with disk hardware

BLACKBOXBSD: 1 (!blackboxbsd)


BLACKBOXNTFS: 1 (!blackboxntfs)


BLACKBOXPNP: 1 (!blackboxpnp)


BLACKBOXWINLOGON: 1

PROCESS_NAME:  System

STACK_TEXT:
ffffd60c`db9626d8 fffff801`308be399     : 00000000`0000007a fffff3eb`066f3340 ffffffff`c0000185 00002000`01e91be0 : nt!KeBugCheckEx
ffffd60c`db9626e0 fffff801`306f87e5     : ffffd60c`00000000 ffffd60c`db962800 ffffd60c`db962838 fffff3f9`00000000 : nt!MiWaitForInPageComplete+0x1c5039
ffffd60c`db9627e0 fffff801`306e9a9d     : 00000000`c0033333 00000000`00000000 ffffd60c`de668000 ffffd60c`de668000 : nt!MiIssueHardFault+0x1d5
ffffd60c`db962890 fffff801`307508a1     : 00000000`00000000 00000000`00000000 ffffdb08`e1050080 00000000`00000000 : nt!MmAccessFault+0x35d
ffffd60c`db962a30 fffff801`30751ed2     : ffffd60c`00000000 00000000`00000000 ffffdb08`e1050080 00000000`00000000 : nt!MiInPageSingleKernelStack+0x28d
ffffd60c`db962c80 fffff801`307c59ea     : 00000000`00000000 00000000`00000000 fffff801`307c5900 fffff801`3105ed00 : nt!KiInSwapKernelStacks+0x4e
ffffd60c`db962cf0 fffff801`306757d5     : ffffdb08`d0144040 fffff801`307c5970 fffff801`3105ed00 a350a348`a340a338 : nt!KeSwapProcessOrStack+0x7a
ffffd60c`db962d30 fffff801`30825548     : ffff9180`893c5180 ffffdb08`d0144040 fffff801`30675780 a4b0a4a8`a4a0a498 : nt!PspSystemThreadStartup+0x55
ffffd60c`db962d80 00000000`00000000     : ffffd60c`db963000 ffffd60c`db95d000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x28


STACK_COMMAND:  .process /r /p 0xffffdb08d00c8080; .thread 0xffffdb08d0144040 ; kb

FAILURE_BUCKET_ID:  0x7a_c0000185_DUMP_VIOSCSI

OS_VERSION:  10.0.20348.859

BUILDLAB_STR:  fe_release_svc_prod2

OSPLATFORM_TYPE:  x64

OSNAME:  Windows 10

FAILURE_ID_HASH:  {f5096b16-2043-7702-792c-bfca7413f754}

Followup:     MachineOwner
---------
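To read the dump: the error status 0xC0000185 is STATUS_IO_DEVICE_ERROR, i.e. a paging read through the vioscsi disk failed, so the bugcheck points at the storage path rather than at RAM. As a rough, unverified sketch (VMID 63254 taken from the replication log below; the time window is only an example and needs to be adjusted to the actual BSOD time), this is what I would check on the Proxmox host around the crash:
Code:
# Disk configuration of the VM (bus, cache, iothread, aio settings) - assumed VMID 63254.
qm config 63254 | grep -E 'scsi|virtio|ide|sata'

# Host kernel messages in an example window around the crash (adjust the times).
journalctl -k --since "2024-11-19 08:00" --until "2024-11-19 09:00" | grep -iE 'i/o error|blk_update_request|zio'

# QEMU / VM related messages from the same window.
journalctl --since "2024-11-19 08:00" --until "2024-11-19 09:00" | grep -iE 'qemu|63254'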

There are other servers on this host without any issue.
The disk itself looks good:
(screenshot: disk / SMART status attached)
and ZFS also reports no errors:
(screenshot: ZFS pool status attached)
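For reference, a minimal sketch of how I double-check the same thing from the host CLI (the device path /dev/sda is a placeholder for the real disk behind the pool):
Code:
# SMART health and error log of the physical disk (placeholder device path).
smartctl -a /dev/sda

# Pool status with per-device read/write/checksum error counters and last scrub result.
zpool status -v rpool

# Recent ZFS events, which can still show transient I/O errors after counters were cleared.
zpool events -v | tail -n 50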

On Windows I noticed there are some warnings:
(screenshot: Windows warnings attached)

And, just my feeling, I think it's related to the ZFS replication (replication log below; a sketch for correlating it with the crash time follows the log):
 
Code:
2024-11-19 08:30:04 63254-0: start replication job
2024-11-19 08:30:04 63254-0: guest => VM 63254, running => 2481
2024-11-19 08:30:04 63254-0: volumes => local-zfs:vm-63254-disk-0,local-zfs:vm-63254-disk-1,local-zfs:vm-63254-disk-2,local-zfs:vm-63254-disk-3,local-zfs:vm-63254-disk-4,local-zfs:vm-63254-disk-5
2024-11-19 08:30:07 63254-0: freeze guest filesystem
2024-11-19 08:30:16 63254-0: create snapshot '__replicate_63254-0_1732001404__' on local-zfs:vm-63254-disk-0
2024-11-19 08:30:16 63254-0: create snapshot '__replicate_63254-0_1732001404__' on local-zfs:vm-63254-disk-1
2024-11-19 08:30:16 63254-0: create snapshot '__replicate_63254-0_1732001404__' on local-zfs:vm-63254-disk-2
2024-11-19 08:30:16 63254-0: create snapshot '__replicate_63254-0_1732001404__' on local-zfs:vm-63254-disk-3
2024-11-19 08:30:16 63254-0: create snapshot '__replicate_63254-0_1732001404__' on local-zfs:vm-63254-disk-4
2024-11-19 08:30:16 63254-0: create snapshot '__replicate_63254-0_1732001404__' on local-zfs:vm-63254-disk-5
2024-11-19 08:30:16 63254-0: thaw guest filesystem
2024-11-19 08:30:18 63254-0: using secure transmission, rate limit: none
2024-11-19 08:30:18 63254-0: incremental sync 'local-zfs:vm-63254-disk-0' (__replicate_63254-0_1731999603__ => __replicate_63254-0_1732001404__)
2024-11-19 08:30:19 63254-0: send from @__replicate_63254-0_1731999603__ to rpool/data/vm-63254-disk-0@__replicate_63254-0_1732001404__ estimated size is 97.7K
2024-11-19 08:30:19 63254-0: total estimated size is 97.7K
2024-11-19 08:30:19 63254-0: TIME        SENT   SNAPSHOT rpool/data/vm-63254-disk-0@__replicate_63254-0_1732001404__
2024-11-19 08:30:19 63254-0: successfully imported 'local-zfs:vm-63254-disk-0'
2024-11-19 08:30:19 63254-0: incremental sync 'local-zfs:vm-63254-disk-1' (__replicate_63254-0_1731999603__ => __replicate_63254-0_1732001404__)
2024-11-19 08:30:20 63254-0: send from @__replicate_63254-0_1731999603__ to rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__ estimated size is 5.94G
2024-11-19 08:30:20 63254-0: total estimated size is 5.94G
2024-11-19 08:30:20 63254-0: TIME        SENT   SNAPSHOT rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:30:21 63254-0: 08:30:21   86.8M   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:30:22 63254-0: 08:30:22    199M   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:30:23 63254-0: 08:30:23    311M   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:30:24 63254-0: 08:30:24    423M   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:30:25 63254-0: 08:30:25    535M   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:30:26 63254-0: 08:30:26    647M   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
...
2024-11-19 08:31:10 63254-0: 08:31:10   5.44G   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:31:11 63254-0: 08:31:11   5.55G   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:31:12 63254-0: 08:31:12   5.66G   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:31:13 63254-0: 08:31:13   5.77G   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:31:14 63254-0: 08:31:14   5.88G   rpool/data/vm-63254-disk-1@__replicate_63254-0_1732001404__
2024-11-19 08:31:16 63254-0: successfully imported 'local-zfs:vm-63254-disk-1'
2024-11-19 08:31:16 63254-0: incremental sync 'local-zfs:vm-63254-disk-2' (__replicate_63254-0_1731999603__ => __replicate_63254-0_1732001404__)
2024-11-19 08:31:16 63254-0: send from @__replicate_63254-0_1731999603__ to rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__ estimated size is 5.01G
2024-11-19 08:31:16 63254-0: total estimated size is 5.01G
2024-11-19 08:31:17 63254-0: TIME        SENT   SNAPSHOT rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:31:18 63254-0: 08:31:18   90.8M   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:31:19 63254-0: 08:31:19    203M   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:31:20 63254-0: 08:31:20    315M   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:31:21 63254-0: 08:31:21    427M   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
...
2024-11-19 08:31:57 63254-0: 08:31:57   4.35G   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:31:58 63254-0: 08:31:58   4.46G   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:31:59 63254-0: 08:31:59   4.57G   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:32:00 63254-0: 08:32:00   4.68G   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:32:01 63254-0: 08:32:01   4.79G   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:32:02 63254-0: 08:32:02   4.90G   rpool/data/vm-63254-disk-2@__replicate_63254-0_1732001404__
2024-11-19 08:33:12 63254-0: successfully imported 'local-zfs:vm-63254-disk-2'
2024-11-19 08:33:12 63254-0: incremental sync 'local-zfs:vm-63254-disk-3' (__replicate_63254-0_1731999603__ => __replicate_63254-0_1732001404__)
2024-11-19 08:33:13 63254-0: send from @__replicate_63254-0_1731999603__ to rpool/data/vm-63254-disk-3@__replicate_63254-0_1732001404__ estimated size is 652K
2024-11-19 08:33:13 63254-0: total estimated size is 652K
2024-11-19 08:33:13 63254-0: TIME        SENT   SNAPSHOT rpool/data/vm-63254-disk-3@__replicate_63254-0_1732001404__
2024-11-19 08:33:13 63254-0: successfully imported 'local-zfs:vm-63254-disk-3'
2024-11-19 08:33:13 63254-0: incremental sync 'local-zfs:vm-63254-disk-4' (__replicate_63254-0_1731999603__ => __replicate_63254-0_1732001404__)
2024-11-19 08:33:14 63254-0: send from @__replicate_63254-0_1731999603__ to rpool/data/vm-63254-disk-4@__replicate_63254-0_1732001404__ estimated size is 7.37M
2024-11-19 08:33:14 63254-0: total estimated size is 7.37M
2024-11-19 08:33:14 63254-0: TIME        SENT   SNAPSHOT rpool/data/vm-63254-disk-4@__replicate_63254-0_1732001404__
2024-11-19 08:33:14 63254-0: successfully imported 'local-zfs:vm-63254-disk-4'
2024-11-19 08:33:14 63254-0: incremental sync 'local-zfs:vm-63254-disk-5' (__replicate_63254-0_1731999603__ => __replicate_63254-0_1732001404__)
2024-11-19 08:33:15 63254-0: send from @__replicate_63254-0_1731999603__ to rpool/data/vm-63254-disk-5@__replicate_63254-0_1732001404__ estimated size is 45.0K
2024-11-19 08:33:15 63254-0: total estimated size is 45.0K
2024-11-19 08:33:15 63254-0: TIME        SENT   SNAPSHOT rpool/data/vm-63254-disk-5@__replicate_63254-0_1732001404__
2024-11-19 08:33:15 63254-0: successfully imported 'local-zfs:vm-63254-disk-5'
2024-11-19 08:33:15 63254-0: delete previous replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-0
2024-11-19 08:33:15 63254-0: delete previous replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-1
2024-11-19 08:33:16 63254-0: delete previous replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-2
2024-11-19 08:33:16 63254-0: delete previous replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-3
2024-11-19 08:33:16 63254-0: delete previous replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-4
2024-11-19 08:33:16 63254-0: delete previous replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-5
2024-11-19 08:33:18 63254-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-0
2024-11-19 08:33:18 63254-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-1
2024-11-19 08:33:18 63254-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-2
2024-11-19 08:33:18 63254-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-3
2024-11-19 08:33:18 63254-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-4
2024-11-19 08:33:18 63254-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_63254-0_1731999603__' on local-zfs:vm-63254-disk-5
2024-11-19 08:33:18 63254-0: end replication job
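If the replication really is the trigger, the "freeze guest filesystem" / "thaw guest filesystem" steps in the log above are the window where guest I/O is paused, so the next step is to compare the replication job times with the exact bugcheck time from MEMORY.DMP. A minimal sketch of the host-side part (job 63254-0 as in the log above):
Code:
# Last sync time, duration and failure count of each replication job on this node.
pvesr status

# Configured replication jobs and their schedules.
pvesr list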
 
