Knowledge Management

How to optimize a large static historical search by getting cached results from the past and recalculating new deltas?

mgaraventa_splu
Splunk Employee

I want to run a simple search counting the total number of events over a time range such as earliest = -6 months, latest = now.

Say I want to run this search on a daily basis. Obviously, I don't need the past 6 months to be recalculated each time, because each consecutive search only adds a small delta to the overall result, namely one new day's worth of data.

Is there a way for me to optimize this search or use some other Splunk functionality in order to get cached results from the past and just recalculate the new deltas?

Thanks.

1 Solution

mgaraventa_splu
Splunk Employee

This can be solved with one of the three approaches described in this documentation article:

http://docs.splunk.com/Documentation/Splunk/6.2.1/Knowledge/Aboutsummaryindexing

i.e.

  1. Report acceleration - Uses automatically-created summaries to speed up completion times for certain kinds of reports.
  2. Data model acceleration - Uses automatically-created summaries to speed up completion times for pivots.
  3. Summary indexing - Enables acceleration of searches and reports through the manual creation of separate summary indexes that exist separately from your main indexes.
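As a concrete sketch of option 3 (summary indexing): schedule a daily search that writes one day's worth of statistics into a summary index, then report over the summary instead of the raw events. The index names and the saved-search name below are illustrative, not from the original post.

A saved search scheduled to run once per day, covering exactly the previous day, with summary indexing enabled in its settings (writing to the default `summary` index):

```
index=main earliest=-1d@d latest=@d
| sistats count
```

The 6-month report then reads only the precomputed daily summaries, so each run scans ~180 small summary events instead of 6 months of raw data:

```
index=summary search_name="daily_event_count" earliest=-6mon@d
| stats count
```

The `sistats`/`stats` pairing matters: `sistats` stores the intermediate statistics in a form that the corresponding `stats` command can correctly combine later.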

Hope this helps.

