Splunk Search

How do I create a search that generates a CSV file and use it as input for another search at the same point in time?

Veeruswathi
Explorer

Hi all,

I would like to generate a CSV file from one search and use it as a lookup file for another query.

The problem is that I want the CSV file to be updated at the same time I run the second query.

Any ideas on this would be much appreciated.

Thanks,
Swathi

1 Solution

Azeemering
Builder

Alright, so I assume you want to create a lookup file you can use with inputlookup.

First, create the lookup by running a base search and writing the results to a CSV file.

Example:

tag=web url=*
| eval list="mozilla"
| `ut_parse_extended(url,list)`
| stats earliest(_time) as earliest latest(_time) as latest by ut_domain
| outputlookup previously_seen_domains.csv

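To sanity-check the baseline (a quick sketch assuming the same lookup file name as above), you can read the CSV back on its own; the convert command simply renders the epoch timestamps as readable dates:

| inputlookup previously_seen_domains.csv
| convert ctime(earliest) ctime(latest)
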
Once you have the baseline, create a search that compares the new data against it and updates the CSV file:

tag=web url=* earliest=-15m
| eval list="mozilla"
| `ut_parse_extended(url,list)`
| stats earliest(_time) as earliest latest(_time) as latest by ut_domain
| inputlookup append=t previously_seen_domains.csv
| stats min(earliest) as earliest max(latest) as latest by ut_domain
| outputlookup previously_seen_domains.csv

The key here is the append=t option on inputlookup: it appends the existing lookup rows to the current results, so the second stats can merge the old and new earliest/latest values before outputlookup rewrites the file.
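
If you also want to surface only the domains that were not in the lookup yet, one possible variation (a sketch reusing the same field and file names as above, not part of the accepted answer) is to exclude everything already in the CSV with a subsearch:

tag=web url=* earliest=-15m
| eval list="mozilla"
| `ut_parse_extended(url,list)`
| stats count by ut_domain
| search NOT [| inputlookup previously_seen_domains.csv | fields ut_domain]

Keep in mind that subsearches are subject to result limits, so for very large lookups the stats merge shown above remains the more scalable way to maintain the baseline itself.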


Veeruswathi
Explorer

Thank you for your reply, Azeemering.

Should I run the base search over all time only once to create the lookup, or should I save it somewhere? (If yes, as what?)

Thanks,
Swathi
