Deployment Architecture

How to build a dataset far back in time with an incremental job process

steinroardahl
Observer

Hi,

I admit, this sounds like a standard search procedure, but I save aggregated data in a CSV file, and I do not want to build the dataset with one long search because of the large resource consumption.

I would like to carry out the procedure as follows:
1. Define a start date
2. Define an end date
3. Define a step size for the search, let's say 5-minute steps (see the sketch below)
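To make this concrete, here is a rough sketch of how I imagine expressing those three parameters as epoch values in SPL (the dates and field names are only placeholders for illustration):

| makeresults
| eval start_time = strptime("2024-03-01 12:00:00", "%Y-%m-%d %H:%M:%S"),
       end_time   = strptime("2024-01-01 00:00:00", "%Y-%m-%d %H:%M:%S"),
       step       = 300

Here start_time is the most recent point, end_time is the oldest point I want to reach, and step is the window size in seconds (5 minutes).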

The process starts with an empty CSV file.
The first run searches the 5-minute window ending at the start date and going back in time. The results are written to the CSV file with the outputlookup command and its append option.
The next run covers the window ending at the start date minus 5 minutes and appends whatever it finds to the CSV file.
The run after that covers the window ending at the start date minus 10 minutes, and so on.
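To illustrate, here is a rough sketch of what a single incremental run could look like (the index name, the stats aggregation, and the lookup file name are placeholders, not my actual setup):

index=my_index earliest="03/01/2024:11:55:00" latest="03/01/2024:12:00:00"
| stats count by host
| outputlookup append=true aggregated_history.csv

The next run would then use earliest="03/01/2024:11:50:00" latest="03/01/2024:11:55:00", and so on, shifting the window back by 5 minutes each time until the end date is reached.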

As you can see, I want to build the dataset with a step-by-step process, working backwards in time from a given start date to an end date.

How do I do this?
