Splunk Search

Search Job inspector - dispatch.fetch slow - why?

cpenkert
Path Finder

I'm trying to track down slowness in searches of all types. Looking at the search job inspector for some of these, I consistently find that dispatch.fetch takes up the vast majority of the overall search time.

I've looked at this document: http://docs.splunk.com/Documentation/Splunk/5.0.1/Search/UsingtheSearchJobInspector. However, I'm looking for insight on how to troubleshoot the long dispatch.fetch time specifically.

Any insight?

1 Solution

dwaddle
SplunkTrust

Fetch is mostly waiting on events to be pulled back from disk, so I would check whether there is a lot of I/O contention. It could also reflect how your searches are structured relative to your data: a dense search (one where a large proportion of events in the index match your search terms) necessarily has more fetching to do than a sparse one.
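
One quick, Splunk-agnostic way to eyeball that is to sample the OS disk counters while a slow search is running. A minimal sketch in Python (assuming the third-party psutil package is installed; the 5-second window is just illustrative):

# Sample per-disk I/O counters over a short window to spot a saturated disk.
import time
import psutil

before = psutil.disk_io_counters(perdisk=True)
time.sleep(5)
after = psutil.disk_io_counters(perdisk=True)

for disk, b in before.items():
    a = after[disk]
    read_mb = (a.read_bytes - b.read_bytes) / 1e6
    write_mb = (a.write_bytes - b.write_bytes) / 1e6
    # Sustained high throughput on the volume holding your indexes while
    # dispatch.fetch is slow points at I/O contention.
    print(f"{disk}: {read_mb / 5:.1f} MB/s read, {write_mb / 5:.1f} MB/s write")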

koshyk
Super Champion

How do I check for I/O contention? (I tried SOS, but it doesn't show it.)


basilarockiaedw
Path Finder

I am connecting my Hunk application (6.4) to DataStax Cassandra 3.1 to get results for monitoring, and the results consistently take 5 seconds to render even though the table only has a few hundred rows.
I have also verified my CassandraERP Connector class, which takes only milliseconds to return a response. Could anyone help me clarify this?

Execution costs
Duration (seconds)  Component                                     Invocations  Input count  Output count
0.00                command.fields                                4            1            1
0.00                command.search                                4            1            1
0.00                command.search.filter                         4            -            -
2.02                command.stdin                                 3            -            1
2.00                command.stdin.cpd2sr                          2            1            1
0.00                command.stdin.calcfields                      1            1            1
2.00                command.stdin.cpd2sr.blocked                  1            -            -
0.00                command.stdin.kv                              1            1            1
0.00                command.stdin.tags                            1            1            1
0.00                command.stdin.typer                           1            1            1
0.00                command.stdin.fieldalias                      1            1            1
0.00                command.stdin.lookups                         1            1            1
0.00                dispatch.check_disk_usage                     1            -            -
0.06                dispatch.createdSearchResultInfrastructure    1            -            -
0.04                dispatch.evaluate                             1            -            -
0.04                dispatch.evaluate.search                      1            -            -
4.08                dispatch.fetch                                6            -            -
0.00                dispatch.localSearch                          1            -            -
0.00                dispatch.preview                              1            -            -
0.00                dispatch.readEventsInResults                  1            -            -
0.00                dispatch.stream.local                         1            -            -
0.00                dispatch.timeline                             6            -            -
0.03                dispatch.writeStatus                          8            -            -
0.01                startup.configuration                         1            -            -
0.03                startup.handoff                               1            -            -
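
One thing I notice in the table: command.stdin.cpd2sr.blocked accounts for 2.00 of the 4.08 seconds under dispatch.fetch, so the search appears to be waiting on data coming back over stdin from the external result provider rather than on Splunk-side processing. A few lines of Python can rank what dominates a table like this (a minimal sketch; the pasted rows are abridged from the table above):

# Rank job-inspector components by duration from pasted "Execution costs" rows.
costs = """\
0.00 command.search 4 1 1
2.02 command.stdin 3 - 1
2.00 command.stdin.cpd2sr 2 1 1
2.00 command.stdin.cpd2sr.blocked 1 - -
4.08 dispatch.fetch 6 - -
0.03 startup.handoff 1 - -"""

rows = []
for line in costs.splitlines():
    duration, component = line.split()[:2]  # remaining columns not needed here
    rows.append((float(duration), component))

# Use the largest component (dispatch.fetch here) as the envelope when
# judging what share of the search each component takes.
envelope = max(rows)[0]
for duration, component in sorted(rows, reverse=True):
    print(f"{duration:5.2f}s  {duration / envelope:6.1%}  {component}")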


cpenkert
Path Finder

Thanks - the disk I/O contention comment led me to find a bad network storage mount parameter on the index server that was causing the disk to be very busy.
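
In case anyone else hits this: the mount options are easy to dump on a Linux box as a starting point. A minimal sketch (assumes /proc/mounts exists and checks a few common network filesystem types; which option is actually "bad" will depend on your storage):

# List network-storage mounts and their options from /proc/mounts.
NETWORK_FS = {"nfs", "nfs4", "cifs", "glusterfs"}

with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype, options = line.split()[:4]
        if fstype in NETWORK_FS:
            # Options such as sync, noac, or small rsize/wsize values can
            # make the random reads dispatch.fetch does painfully slow.
            print(f"{mountpoint} ({fstype}): {options}")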
