Hi everyone,
I'm seeing some strange Splunk behavior with one of my indexes, but first a little bit of background:
The issue:
When I run search 1 (with the field "host") in Fast mode, it is 10 to 20 times slower than search 2.
Search 1
index=my_index sourcetype=cisco:wsa:squid | fields _time, _indextime, source, sourcetype, host, index, splunk_server, _raw
Search 2
index=my_index sourcetype=cisco:wsa:squid | fields _time, _indextime, source, sourcetype, index, splunk_server, _raw
I have already reviewed the full configuration and there is nothing on any of the instances that modifies the field "host" in any way. Yet as soon as I reference it in my search, the search is drastically slower, which is causing issues further down the line.
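For anyone who wants to repeat that check: btool with --debug shows which file each setting comes from, so something along these lines (run on every search head and indexer; paths assume a standard $SPLUNK_HOME, and the grep is just a coarse filter) should surface any props or transforms touching "host":
$SPLUNK_HOME/bin/splunk btool props list cisco:wsa:squid --debug | grep -i host
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i host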
This issue does not manifest on other indexes, and all indexes are configured with the same options in indexes.conf.
Hope someone can give me a good clue for troubleshooting.
Adding a side-by-side view of the search performance.
Inspect both jobs and see what the difference is, because this is counterintuitive, especially since host is an indexed field.
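As a quick sanity check that host still behaves like a normal indexed field on that index, a tstats split along these lines (index and sourcetype names taken from the searches above) should return almost instantly:
| tstats count where index=my_index sourcetype=cisco:wsa:squid by host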
I haven't seen any events in search.log that would explain this behavior, and there are no errors or warnings.
The execution time analysis shows longer times only in the "dispatch.stream.remote" component (fetching data from the indexers).
Data is evenly balanced across the cluster, so it is not an issue with a single node.
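A quick way to check that balance is a tstats split by peer, for example:
| tstats count where index=my_index sourcetype=cisco:wsa:squid by splunk_server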
That's very unusual. The only explanation that comes to mind is that it's not connected to the search itself at all: you've simply hit the concurrent search limit and had to wait for "free" search peers, and it only coincidentally correlated with the change in your search. But if it's repeatable (every time adding the host field results in a long-running search, and removing it makes the search run quickly), that explanation would have to be wrong.
It is repeatable and only manifests on this index when I use the field "host". In all other cases searches run normally and "fast".