Splunk Search

Trying to performance test my Splunk searches, but why are results sometimes returned too fast?

bhawkins1
Communicator

Hello,

I have a report that generates results, which are useful for loading with | loadjob, and that also writes events into a summary index, which are useful for streaming with a search like index=summary search_name=x.
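
For illustration, the two access patterns look roughly like this (the owner, app, and report names below are placeholders, not my real ones):

| loadjob savedsearch="me:my_app:my_report"

versus streaming the summary-indexed events:

index=summary search_name="my_report"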

I'm building a dashboard where some panels will stream events and others will use the full report results. I need to record some metrics so my team can estimate the cost of each approach and make the right call between loadjob and search for each panel of that dashboard.
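
To gather those metrics I was planning to read the run statistics back out of the audit index; something along these lines (a rough sketch - I still need to confirm the exact field names in our audit events):

index=_audit action=search info=completed
| table _time user search_id savedsearch_name total_run_time scan_count event_count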

The problem is that I can't get consistent metrics because if I run a search like:

index=summary search_name=bar user_group=a
| stats count by user_*

And then a search like:

index=summary search_name=bar user_group=a user_class=admin
| stats count by user_*

Then the first search will cause the second one to run much faster - e.g. 180 seconds, then 0.5 seconds.

Clearly having fast filtering is great since some dashboard panels might have drill-down behavior, but this is a nuisance to deal with during testing.

My question is two-fold:

  • Where can I read about this behavior where secondary searches are somehow performance-boosted?
  • Is it possible to disable this feature so I can get consistent metrics?

My best guess is that, since I have a search-based accelerated data model over index=summary search_name=bar, the first search causes the subsequent searches to use that accelerated model. Perhaps someone can confirm or rule out this theory?
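
One way I thought of to test this theory is to query the accelerated summaries directly with summariesonly=true and compare timings; roughly like this (the data model and dataset names here are made up):

| tstats summariesonly=true count from datamodel=Summary_Bar where Summary_Bar.user_group=a by Summary_Bar.user_group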

Kind Regards,


woodcock
Esteemed Legend

Read this:
https://docs.splunk.com/Documentation/Splunk/6.5.2/Search/Quicktipsforoptimization

To turn off optimization for a specific search, make the last command in your search string:

| noop search_optimization=false

But this may defeat the whole purpose of your comparison (you would no longer be comparing apples to oranges, just watermelons to watermelons).
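
For example, applied to the second search from your question, it would look something like this:

index=summary search_name=bar user_group=a user_class=admin
| stats count by user_*
| noop search_optimization=false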

niketn
Legend

Ideally, your second search should always run faster than the first because it has more filters, especially on the fields used in the stats command. May I ask how many user_ fields are present in the index being searched?

The following is a better expression for the second search, provided there are only two user_ fields:
index=summary search_name=bar user_group=a user_class=admin
| stats count by user_group user_class

Even though you have not filtered on a specific user_class in the first search, it is better to add a user_class=* condition to the base search so that only records containing the user_class field you are aggregating on are returned (this removes NULL values up front and runs faster because the filter criteria are in the base search).
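
For example, the first search could be rewritten along these lines (assuming the same two user_ fields as above):

index=summary search_name=bar user_group=a user_class=*
| stats count by user_group user_class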

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

bhawkins1
Communicator

Hi, thanks for the comment, but this isn't actually related to the question. If I run a first query with 4 filters and then a second query with only 2 filters, I sometimes still see a long first query and a fast (sub-second) second query, even though the second has fewer filters. It seems to me that Splunk keeps the queried events in memory.


jmallorquin
Builder

Hi,

You can wait for the search job to be deleted; by default I think that happens after 10 minutes.
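
If you do not want to just wait, I believe you can also check (and delete) finished jobs under Activity > Jobs in Splunk Web, or list them with their remaining lifetime; a rough sketch (field names may differ slightly by version):

| rest /services/search/jobs
| table sid label dispatchState ttl runDuration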

Regards,

bhawkins1
Communicator

Thanks for the suggestion. I will do this for testing purposes. It would still be nice to understand how this works so we can either take advantage of it in the future or avoid it when necessary.
