Splunk Search

Slow search when using field "host"

dalbreht
Observer

Hi everyone,

I'm seeing some strange Splunk behavior on one of my indexes, but first a little background:

  1. The environment is an indexer cluster with 1 search head (SH)
  2. Proxy logs are ingested from a syslog server via a universal forwarder (monitor input)
  3. The monitor input uses the host_segment option to extract the host from the path (see the sketch after this list)
  4. The sourcetype is set to "cisco:wsa:squid" from the Splunkbase app "Splunk_TA_cisco-wsa"
  5. I'm not using any local configuration for that sourcetype (on any instance)
  6. There are no props.conf stanzas that apply configuration based on source or host (e.g. [host::something]) for this specific source or host
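
For context, the monitor stanza looks roughly like this (path and segment number are illustrative, not my exact values):

# inputs.conf on the universal forwarder (illustrative)
[monitor:///var/log/proxy/*/access.log]
index = my_index
sourcetype = cisco:wsa:squid
# host is taken from the 4th segment of the source path (var=1, log=2, proxy=3, <host>=4)
host_segment = 4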

The issue:

When I run search 1 (with the field "host") in fast mode, it is 10 to 20 times slower than search 2.

Search 1

index=my_index sourcetype=cisco:wsa:squid | fields _time, _indextime, source, sourcetype, host, index, splunk_server, _raw

Search 2

index=my_index sourcetype=cisco:wsa:squid | fields _time, _indextime, source, sourcetype, index, splunk_server, _raw

I have already reviewed the full configuration, and there is no configuration on any of the instances that modifies the field "host" in any way. Yet when I use it in my search, it is drastically slower, which causes issues further down the line.

This issue does not manifest on other indexes, and all indexes are configured with the same options in indexes.conf.
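
For reference, btool can confirm the effective settings and show which file each one comes from (run on an indexer; my_index as above):

splunk btool indexes list my_index --debug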

Hope someone can give me a good clue for troubleshooting.


dalbreht
Observer

Adding a side-by-side view of the search performance:

[Attached screenshot: dalbreht_0-1634905179890.png]

PickleRick
SplunkTrust

Inspect both jobs and see what the difference is, because this is counterintuitive, especially since host is an indexed field.
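
If you want to pull the numbers outside the Job Inspector, something along these lines should work (fields as exposed by the search/jobs REST endpoint; adjust as needed):

| rest /services/search/jobs splunk_server=local
| table sid, title, runDuration, scanCount, eventCount

Run both variants, then compare runDuration and scanCount for the two sids.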


dalbreht
Observer

I haven't seen any events in search.log that would explain this behavior. No errors or warnings.

Execution time analysis shows longer times only in the "dispatch.stream.remote" component (fetching data from the indexers).

Data is evenly balanced across the cluster, so it is not an issue with a single node.
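
For completeness, this is roughly how I checked the distribution (event counts per indexer for the index in question):

| tstats count where index=my_index by splunk_server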


PickleRick
SplunkTrust

That's very unusual. The only explanation that comes to mind is that it's not connected to the search itself at all: you may simply have hit the concurrent search limit and had to wait for "free" search slots, and it only coincidentally correlated with the change in your search. But if it's repeatable (every time, adding the host field makes the search slow and removing it makes the search run quickly), that explanation would have to be wrong.
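
If you want to rule that out, the introspection endpoint below should show current versus maximum search concurrency (endpoint name from memory; verify against the REST reference for your version):

| rest /services/server/status/limits/search-concurrency splunk_server=local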


dalbreht
Observer

It is repeatable and only manifests on this index when I use the field "host". In all other cases, searches run normally and "fast".