Splunk Search

Is there any option in Splunk to run a search in a loop?

sunnyb147
Path Finder

Hi All, Good morning,
Is there any option in Splunk to run a search in a loop?

Basically, I have a search that produces results in a tabular format, which I then pipe into a CSV file; a single iteration works fine.

But I want to run that search for, say, 7 days based on date_mday, so I was wondering if there is something like a for loop, so that each time the search executes it appends its output to the CSV file.

Sample search:

index=test1 country=india
| dedup txn-id
| stats count(txn-id) as unique_txns by date_mday, date_month
| table date_mday, date_month, unique_txns
| outputlookup append=true sunny_test.csv

Any help/guidance would be really appreciated.

Thanks,
Sunny

1 Solution

sunnyb147
Path Finder

Found the solution without using a loop, and it's working fine.

Instead of doing a dedup and then counting the unique transactions, I counted the distinct count of transactions.
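Applying that change to the sample search from the question, a sketch of the reworked query might look like this (index, field names, and lookup file taken from the original post):

```spl
index=test1 country=india
| stats dc(txn-id) as unique_txns by date_mday, date_month
| outputlookup append=true sunny_test.csv
```

Because dc() counts each txn-id at most once per date_mday/date_month group, a transaction-id that appears on several days is still counted once for each day it appears on, which avoids the undercount seen with dedup over a multi-day window.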



amitm05
Builder

Hi @sunnyb147

I can't say exactly why you're looking for a loop for the scenario you described; it looks more like you want to run the search on a cron schedule.
In case I've misunderstood your logic, below is a sample of how you could use a loop:

| makeresults
| eval Feature.Flags.1 = "True", Feature.Flags.2 = "abc", Feature.Flags.3 = ""
| eval HostFlags=""
| foreach "Feature.Flags"* [eval HostFlags='<<FIELD>>']
| where HostFlags!=""
| table Feature.Flags*

Also, take a look at - https://docs.splunk.com/Documentation/Splunk/7.3.0/SearchReference/Foreach
Hope this helps. Let me know.


sunnyb147
Path Finder

Hi @amitm05 ,

Thanks for your feedback. Let me try to explain the scenario and what I'm after:

  1. If I run the query above, it gives me a unique count of all transaction-ids for the specific day I select in the time picker.
  2. If I add it to a cron schedule, it will append to the file on a daily basis.
  3. What I want is a while or for loop: pass the query into it, have it take the date via the date_mday field, extract the results, and append them to the CSV file.

The main issue is that if I select 7 days in the time picker and dedup the transaction-id, I don't get the correct count, because some transaction-ids are duplicated across multiple dates.

Example: if I search for a unique count on 30 June 2019 alone, the count is 515, but if I select a 7-day window it gives me a count of 483 for 30 June 2019.

I tried using map and foreach but didn't get what I was looking for.

Thanks,
Sunny


amitm05
Builder

Let me know if this answers your query, or if there is more to it. Please accept the answer if you are OK with it. Thanks


skalliger
SplunkTrust

Where exactly is the problem? Create a scheduled search that runs every seven days with earliest=-7d@d and latest=@d, for example, and that's it. Your outputlookup already uses append=true, so the file will get appended.
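As a sketch, that scheduled search could be the original query with the time range pinned down (index, field names, and lookup file assumed from the question):

```spl
index=test1 country=india earliest=-7d@d latest=@d
| dedup txn-id
| stats count(txn-id) as unique_txns by date_mday, date_month
| outputlookup append=true sunny_test.csv
```

Note that dedup txn-id here keeps each transaction-id only once across the whole 7-day window, which is the per-day undercount raised in the follow-up reply below.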

Skalli


sunnyb147
Path Finder

Hi @skalliger,

Thanks for your feedback. The main issue is that if I select 7 days in the time picker (or via earliest/latest) and dedup the transaction-id, I don't get the correct count, because some transaction-ids are duplicated across multiple dates.

Example: if I search for a unique count on 30 June 2019 alone, the count is 515, but if I select a 7-day window it gives me a count of 483 for 30 June 2019.

What I am looking for is a while or for loop: pass the query into it, have it take the date via the date_mday field, extract the results, and append them to the CSV file.

Thanks,
Sunny
