Set the index with a field when using the collect command

ejwade
Contributor

Hello!

I'm looking to set the index parameter of the collect command with the value of a field from each event.

Here's an example.

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| collect index=index testmode=true

This search creates two events. Both events have the index field, one with "myindex1" as the value, and the other with "myindex2". I would like to use these values to set the index in the collect command.

1 Solution

ejwade
Contributor

After tinkering with it more, I think the best approach uses the map command.

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| outputlookup lookup_of_events
| stats
    count
    by index
| map report_to_map_through_indexes

where the saved search report_to_map_through_indexes contains:

| inputlookup lookup_of_events
    where index="$index$"
| collect index="$index$"
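Here map runs the saved search once per row of the stats output, substituting each row's index value for the $index$ token, so each destination index gets its own collect run.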


bowesmana
SplunkTrust

It can be done with map, but "best approach" is not a phrase that would normally be used when talking about the map command. As @PickleRick indicates, it has to be used carefully.

In your pseudo example it's fine, but with real data remember that each result initiates a new run of the saved search. If you have lots of results, this runs collect for each and every row, which can place significant additional load on the server - and by default map will only run 10 iterations.
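For reference, that 10-iteration cap is map's maxsearches argument, which can be raised explicitly if you accept the extra load. A minimal sketch against the accepted answer's search, reusing the same names:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| outputlookup lookup_of_events
| stats count by index
| map report_to_map_through_indexes maxsearches=50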

PickleRick
SplunkTrust

On top of that, your user might simply be restricted from using such commands. And your dashboards may not run if powered by risky commands.

https://docs.splunk.com/Documentation/Splunk/latest/Security/SPLsafeguards

PickleRick
SplunkTrust

Be aware that map is a potentially unsafe command.

Also, your approach with both map and an intermediate lookup seems strange. That's what passing fields to the subsearch is for.
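To illustrate what passing fields to the subsearch looks like: map substitutes each incoming row's field values into $field$ tokens of an inline search string. A minimal sketch - "destination" is a made-up field name purely to show the substitution:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| stats count by index
| map search="| makeresults | eval destination=\"$index$\""

Each iteration receives one index value as a token; note that only the row's fields are passed - the original events are not available inside the subsearch.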

ejwade
Contributor

The lookup reduces the iterations of the map command. In a real-world scenario, I have a field called "dept" that lists one of ten departments for each result. The map command only needs to iterate once per department (ten times total): the outputlookup saves off the data, the stats splits the results by dept, and the map iterates through them.
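For concreteness, a minimal sketch of that pattern - the lookup and saved-search names here are illustrative, and it assumes each dept maps to an index of the same name:

... base search that produces the dept field ...
| outputlookup dept_events
| stats count by dept
| map report_to_map_through_depts

where the hypothetical saved search report_to_map_through_depts would contain:

| inputlookup dept_events where dept="$dept$"
| collect index="$dept$"

With ten departments this stays within map's default limit of 10 iterations.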


PickleRick
SplunkTrust

You can't. Even with output_format=hec you can specify some metadata fields like source or sourcetype (which can affect your license usage), but the destination index has to be provided explicitly in the collect command invocation.
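For illustration, this is the shape collect does support - a literal index plus optional metadata arguments (a sketch using standard collect options; changing the sourcetype away from the default stash is what makes the collected data count against your license):

| makeresults
| eval message="example"
| collect index=myindex1 source="my_source" sourcetype="my_sourcetype" testmode=true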

danspav
SplunkTrust

Hi @ejwade,

I'm with @bowesmana on this - I don't think it's possible to run | collect with multiple index locations.

You could do this instead:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| appendpipe [| search index="myindex1" | collect index=myindex1]
| appendpipe [| search index="myindex2" | collect index=myindex2]


You will need an appendpipe command for each index you want to export to, but you should know the destination indexes in advance anyway.


bowesmana
SplunkTrust

I don't believe it is possible to do - you can in theory do this

index=_audit
| head 1
| eval message="hello"
| table user action message
| collect testmode=f [ | makeresults | fields - _time | eval index="main" | format "" "" "" "" "" ""]

but you would need the subsearch to know which index to select, and the subsearch runs before the outer search, so you can't do what you are trying to do.
