
How to optimize a large static historical search by getting cached results from the past and recalculating new deltas?

mgaraventa_splu
Splunk Employee

I want to run a simple search counting the total number of events over a time range such as earliest = -6 months, latest = now.
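For concreteness, the kind of search I mean is something like the following (the index name my_index is just a placeholder):

    index=my_index earliest=-6mon latest=now | stats count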

Say I want to run this search on a daily basis. Obviously, I don't need the past 6 months to be recalculated and regenerated each time, because each consecutive search only adds a small delta to the overall result, namely one new day's worth of data.

Is there a way for me to optimize this search or use some other Splunk functionality in order to get cached results from the past and just recalculate the new deltas?

Thanks.

1 Solution

mgaraventa_splu
Splunk Employee

This can be solved by following one of the three approaches described in this documentation article:

http://docs.splunk.com/Documentation/Splunk/6.2.1/Knowledge/Aboutsummaryindexing

i.e.

  1. Report acceleration - Uses automatically created summaries to speed up completion times for certain kinds of reports.
  2. Data model acceleration - Uses automatically created summaries to speed up completion times for pivots.
  3. Summary indexing - Enables acceleration of searches and reports through the manual creation of summary indexes that exist separately from your main indexes (see the sketch after this list).
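As a rough sketch of the summary indexing approach for this case (the saved search name daily_event_count, the summary index name summary_event_counts, and the index my_index are illustrative placeholders, not required names): schedule a search that runs once a day over only the previous day and writes its partial result to a summary index using the si- variant of the reporting command, for example:

    index=my_index earliest=-1d@d latest=@d | sistats count

Save this as a scheduled report (say, daily_event_count), enable summary indexing in its settings, and point it at a summary index such as summary_event_counts. Each nightly run then adds just one day's delta. The 6-month report becomes a cheap search over the pre-computed daily results, since events written by summary indexing typically carry the saved search name as their source:

    index=summary_event_counts source="daily_event_count" earliest=-6mon@d latest=now | stats count

Because sistats stores intermediate statistics rather than raw events, the final stats count over the summary index yields the same total as running stats count over 6 months of raw data, without rescanning those 6 months every day.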

Hope this helps.

