Reporting

scheduled reports in order

jperezes
Path Finder

Hi,
I need to create some searches, one of them dependent on the other, and save the result in a CSV file.

The idea is:

  1. Make a search for the last 24 h and save it to a document.
  2. Append the daily document to the historical one.
  3. Remove duplicates in the resulting document.

These three actions should run consecutively in the scheduled search; that means the second needs the first to finish, and the third needs the second to finish before it starts.

I have been looking at the documentation but there is not much on this for Splunk 6.3.

Regards and thanks in advance.

juan


somesoni2
Revered Legend

If I understand your requirement correctly, you can achieve all three things in one search. See this run-anywhere sample. Assume you want to create a report mapping indexes to sourcetypes (which index contains which sourcetypes) and you want to store this information in a lookup CSV file called index_to_sourcetype.csv.

Schedule this search to run daily, maybe at 1 AM, looking at data for yesterday (earliest=-1d@d, latest=@d).
Search:

| inputlookup index_to_sourcetype.csv 
| append [| tstats count WHERE index=* by index, sourcetype | table index, sourcetype  ] 
| stats count by index sourcetype | table index sourcetype | outputlookup index_to_sourcetype.csv

Here,

Line 1 selects the historical data from the CSV.
Line 2 appends the latest data to the historical data.
Line 3 removes the duplicates and updates the historical data.

martin_mueller
SplunkTrust
SplunkTrust

For simplicity's sake, I'd swap the subsearch for an appending inputlookup:

| tstats ...
| inputlookup append=t ...
| stats ...

Additionally, when keeping state in a lookup file it's usually a good idea to include some kind of timestamp, either event time or search time. Then you can amend your lookup-updating search to clean out really old data and avoid the file growing indefinitely. Some insights: http://blogs.splunk.com/2011/01/11/maintaining-state-of-the-union/
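As a rough sketch of that pruning idea (the `last_seen` field name and the 90-day retention window are illustrative choices, not anything from the original search):

| tstats count WHERE index=* earliest=-1d@d latest=@d by index, sourcetype
| eval last_seen=now()
| table index, sourcetype, last_seen
| inputlookup append=t index_to_sourcetype.csv
| stats max(last_seen) as last_seen by index, sourcetype
| where last_seen > relative_time(now(), "-90d@d")
| outputlookup index_to_sourcetype.csv

The stats on max(last_seen) both removes duplicates and keeps the most recent sighting of each index/sourcetype pair, so the where clause can drop pairs not seen in the last 90 days.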


jperezes
Path Finder

Thank you both, much appreciated. So if I understood correctly, the final search would be something like:

| tstats count WHERE index=* earliest=-1d@d latest=@d by index, sourcetype 
| table index, sourcetype
| inputlookup append=t index_to_sourcetype.csv
| stats count by index sourcetype | table index sourcetype | outputlookup index_to_sourcetype.csv

with the earliest/latest to get just the last day's data. For now I don't think the file is going to grow too big.

Kind Regards,

Juan
