Splunk Search

How to avoid cache when performing benchmark searches?


I want to profile/benchmark a few different methods of searching, but Splunk hitting the search cache sometimes keeps me from getting true results. For instance, perhaps I want to run this search several times:

index=firewall src=

Splunk might take 80 seconds the first time, and then on subsequent runs, will only take 5 seconds. It's apparent that it's caching the data somewhere. To get an accurate test, I need to avoid that cache. (While I test, I'm running it multiple times, but that's not how it will be used in reality.)
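For repeated timings, a small wrapper around the Splunk CLI can keep the trials consistent. This is only a sketch: it assumes the splunk binary is on your PATH, and the search string is a made-up placeholder, not the actual search from the question.

```shell
#!/bin/sh
# Sketch: time repeated runs of the same search via the Splunk CLI.
# The search string below is a hypothetical placeholder; substitute your own.
SEARCH='index=firewall src=192.0.2.0/24 earliest=-24h@h latest=@h'

run_trial() {
    # Dispatch the search once; discard the output so printing results
    # to the terminal doesn't skew the timing.
    time splunk search "$SEARCH" -preview false > /dev/null
}

# Usage, on a search head with the splunk CLI on PATH:
#   for i in 1 2 3; do run_trial; done
```

Pinning the time range (rather than searching to "now") also keeps each trial over the same set of buckets, which matters more than the wrapper itself.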


That sounds like a storage-bound search, so you'll want to clear any storage-related caches. Depending on your storage system there may be more to it, but on Linux you can at least clear the page cache, which should be the fastest, largest cache you have.

I think I would start by clearing the page cache before each run and only searching historical data. If you include data up to "now" you'll probably also need to run sync to write dirty pages to disk before clearing the page cache (maybe that's not a bad idea to do anyway since it shouldn't hurt anything). I'm also not sure if the performance of hot buckets can change during their lifetime, so I'd stick with warm/cold. Unfortunately, if you're on a SAN or some other storage with gigantic caches, you may still see the synthetically great performance unless you have some way to clear those caches as well.
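To make the "sync, then drop the page cache" step concrete, here's a minimal sketch. The /proc/sys/vm/drop_caches interface is standard on Linux (writing 3 drops the page cache plus dentries and inodes), but it requires root, so the function below refuses politely otherwise.

```shell
#!/bin/sh
# Flush dirty pages to disk, then drop the Linux page cache, dentries,
# and inodes, so the next search reads from storage rather than RAM.
drop_fs_caches() {
    sync    # write dirty pages out first; dropped caches can't free dirty pages
    if [ "$(id -u)" -eq 0 ]; then
        echo 3 > /proc/sys/vm/drop_caches   # 1=page cache, 2=dentries/inodes, 3=both
    else
        echo "drop_fs_caches: must run as root" >&2
        return 1
    fi
}

# Typical rhythm per trial: drop_fs_caches, run the search, record the time.
```

Note this only touches the OS-level cache on the indexer itself; SAN or RAID controller caches need their own vendor-specific flush, if one exists at all.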

You have me wondering about approaches to do detailed testing of index performance. It would be interesting to test different searches, index segmentation options, indexed field settings, etc. and be able to accurately determine how many I/Os are required to return results. On the other hand, at this point I have to imagine that a lot of customers are moving to flash and only the most performance-sensitive installations would need to optimize further. Interesting!



It would be interesting to test different searches, index segmentation options, indexed field settings, etc. and be able to accurately determine how many I/Os are required to return results.

This is almost exactly my use case. Firewall logs have so much repeated data that their cardinality is low (I think... I sometimes get those reversed). I'm trying to determine which approach will make searching them fastest: index-time field extraction, data models, segmenters, etc.



I'm not aware of any setting to disable caching, but you can adjust parameters so that cached results expire faster. You might have to lower the ttl settings for ad hoc searches temporarily for your testing, and revert them once you're done. The settings live in limits.conf on the search head. A few of the important ones are:

[search]
ttl = <integer>
* How long search artifacts should be stored on disk once completed, in
  seconds. The ttl is computed relative to the modtime of status.csv of the job
  if such file exists or the modtime of the search job's artifact directory. If
  a job is being actively viewed in the Splunk UI then the modtime of
  status.csv is constantly updated such that the reaper does not remove the job
  from underneath.
* Defaults to 600, which is equivalent to 10 minutes.

cache_ttl = <integer>
* The length of time to persist search cache entries (in seconds).
* Defaults to 300.

[subsearch]
ttl = <integer>
* Time to cache a given subsearch's results, in seconds.
* Do not set this below 120 seconds.
* See definition in [search] ttl for more details on how the ttl is computed
* Defaults to 300.
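Putting those together, a temporary override might look like the fragment below. The stanza and setting names match limits.conf.spec, but the values are illustrative assumptions; place the file in $SPLUNK_HOME/etc/system/local/limits.conf on the search head and remove it when testing is done.

```ini
# Temporary settings for benchmark runs only -- revert afterwards.
[search]
ttl = 60           # reap finished search artifacts after 1 minute (default 600)
cache_ttl = 30     # keep search cache entries for 30 seconds (default 300)

[subsearch]
ttl = 120          # subsearch result cache; the spec says not to go below 120
```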