Deployment Architecture

How to build a dataset far back in time with incremental job process

steinroardahl
Observer

Hi,

I admit this sounds like a standard search procedure, but I save aggregated data to a CSV file, and I do not want to build the dataset in one long search because of the large resource consumption.

I would like to carry out the procedure as follows:
1. Define a start date
2. Define an end date
3. Define a step size, let's say 5-minute steps

The process starts with an empty CSV file.
The first run searches the window from the start date back 5 minutes and writes the results to the CSV file with the outputlookup command in append mode.
The next run starts at the start date minus 5 minutes and appends its findings to the CSV file.
The next run starts at the start date minus 10 minutes, appends its findings, and so on.
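Each step could then be a single scheduled search along these lines. This is only a sketch: the index, the `stats` aggregation, and the lookup name `my_dataset.csv` are placeholders, and the `earliest`/`latest` time modifiers would shift back by 5 minutes on each run. `append=true` tells `outputlookup` to add rows to the lookup file instead of overwriting it.

```spl
index=main earliest=-10m@m latest=-5m@m
| stats count by host
| outputlookup append=true my_dataset.csv
```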

As you can see, I want to build a dataset with a step-by-step process, working backward in time from a given start date to an end date.
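The backward stepping described above can be sketched as a generator of search windows (the function name, dates, and step size here are illustrative, not part of any Splunk API):

```python
from datetime import datetime, timedelta

def backward_windows(start, end, step=timedelta(minutes=5)):
    """Yield (earliest, latest) window pairs, stepping back from
    the start date toward the (older) end date."""
    latest = start
    while latest > end:
        earliest = max(latest - step, end)  # clamp the last window to the end date
        yield earliest, latest
        latest = earliest

# Example: 15 minutes of history in 5-minute steps, newest window first
windows = list(backward_windows(
    datetime(2024, 1, 1, 12, 0),
    datetime(2024, 1, 1, 11, 45),
))
for earliest, latest in windows:
    print(earliest, "->", latest)
```

Each yielded pair would become the `earliest`/`latest` bounds of one incremental search whose results are appended to the CSV.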

How do I do this?
