Splunk Search

How does throttling work?

lucas4394
Path Finder

I wonder how throttling works when the last pipe of the search redirects the results to another tool, such as sending them to a ticketing system. I got repeat events in the ticketing system even though the content of the throttling field was the same.

Any clues? Thanks.

Sample search, where field1 is the throttling field:

blah blah ...
| eval field1=fieldx.last_report_time
| table field1 field2 field3, field4
| sendResultToTicket
1 Solution

aberkow
Builder

My understanding of throttling is that it prevents alert actions from being triggered. Since a pipe command is still part of the search, I would guess that throttling has no effect on preventing | sendResultToTicket from occurring, because the search hasn't completed yet and so can't be throttled. I think this because the front end says "After an alert is triggered, subsequent alerts will not be triggered until after the throttle period", but that doesn't say the searches themselves aren't run.
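To make that concrete - throttling is configured on the alert object itself (roughly the savedsearches.conf settings sketched below, with an invented stanza name and schedule), not anywhere in the SPL, which is why a command piped inside the search still runs on every scheduled execution:

[my_ticket_alert]
enableSched = 1
cron_schedule = */15 * * * *
search = blah blah ... | eval field1=fieldx.last_report_time | table field1 field2 field3 field4
# Throttling lives here, as alert-level suppression. It only gates the alert
# actions after the search finishes and the trigger condition is met; it does
# not stop the scheduled search (or anything piped inside it) from running.
alert.suppress = 1
alert.suppress.period = 24h
alert.suppress.fields = field1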

As a suggestion on what you can do - create a CSV that holds all the tickets you've already sent with sendResultToTicket, and add a search clause to blacklist those that have been created already. Then, for those that aren't blacklisted yet, run them through sendResultToTicket and add them to the blacklist:

blah blah ...
 | eval field1=fieldx.last_report_time
 | table field1 field2 field3, field4, ticket
 | search NOT [ | inputlookup ticketCsv.csv | table ticket ]
 | sendResultToTicket
 | outputlookup append=t ticketCsv.csv

Hope this helps!


TheWoodRanger
Explorer

To add to @aberkow's answer - throttling is a mechanism that suppresses the alert actions attached to the search object, meaning throttling rules are applied only after a search completes:
Search completes > Check alert conditions > Run alert actions if the condition is true

When the `sendResultToTicket` command is within the SPL of the search, throttling configurations aren't considered at all; you'd need to incorporate logic that avoids executing that command when the ticket already exists, etc.

For your case, you'd need to set up `sendResultToTicket` as an alert action that executes on each result in order to use throttling (pointed at specific fields) as a way to avoid running the same action on the same set of result field values.
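A rough sketch of what that could look like in savedsearches.conf, assuming your ticketing integration registers a custom alert action (I'm calling it sendresulttoticket purely for illustration - use whatever action name your app actually provides):

[my_ticket_alert]
search = blah blah ... | eval field1=fieldx.last_report_time | table field1 field2 field3 field4
# Trigger the action once per result rather than once per search run
alert.digest_mode = 0
# Throttle on specific field values: the same field1 value won't fire the
# action again until the suppression period expires
alert.suppress = 1
alert.suppress.fields = field1
alert.suppress.period = 24h
# Hypothetical custom alert action registered by your ticketing app
action.sendresulttoticket = 1

The important difference is that the SPL no longer pipes to sendResultToTicket at all; the dedup key becomes the throttled field values rather than your own bookkeeping.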

To update his answer a bit, the other way is to use in-SPL deduplication/throttling logic.

You can preserve the original output by using appendpipe instead of filtering out all the results that match the lookup. appendpipe runs a subpipeline over the current results in a nested scope, which lets you run commands and filter without affecting the outer search results (the appendpipe results will be appended to the output unless you end the subpipeline with `| where false()`).

I also changed the filter subsearch to use stats values() instead of a | table output, to avoid hitting the subsearch limit of 50k rows.

blah blah ...
 | eval field1=fieldx.last_report_time
 | table field1 field2 field3, field4, ticket
 | appendpipe 
    [| search NOT ticket IN [ 
        | inputlookup ticketCsv.csv 
        | search <desired condition, e.g. status="Open"> 
        | stats values(ticket) AS ticketIDsToFilter 
        | rename ticketIDsToFilter AS search 
        | eval search = "(\"" . mvjoin(search, "\", \"") . "\")" ]
    | sendResultToTicket
    | outputlookup append=t ticketCsv.csv 
    | where false() ]
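One practical note, as an assumption about your environment rather than something in the search above: if ticketCsv.csv doesn't exist yet, the inputlookup in the filter subsearch has nothing to read on the first run, so you may want to seed the lookup once with a dummy row, for example:

| makeresults 
| eval ticket="seed-placeholder" 
| table ticket 
| outputlookup createinapp=true ticketCsv.csv

The dummy value just needs to be something that will never match a real ticket ID.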

 
